Jan 23 11:52:34 crc systemd[1]: Starting Kubernetes Kubelet... Jan 23 11:52:34 crc restorecon[4589]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 23 11:52:34 
crc restorecon[4589]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 23 11:52:34 crc restorecon[4589]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:34 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 
11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc 
restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 
crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 
crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 
11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 11:52:35 crc 
restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 
11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 
11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc 
restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 11:52:35 crc restorecon[4589]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 11:52:35 crc restorecon[4589]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 23 11:52:35 crc kubenswrapper[4865]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 11:52:35 crc kubenswrapper[4865]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 23 11:52:35 crc kubenswrapper[4865]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 11:52:35 crc kubenswrapper[4865]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 11:52:35 crc kubenswrapper[4865]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 23 11:52:35 crc kubenswrapper[4865]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.953400 4865 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956816 4865 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956846 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956852 4865 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956856 4865 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956862 4865 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956867 4865 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956872 4865 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956876 4865 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956882 4865 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956887 4865 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956892 4865 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956898 4865 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956904 4865 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956909 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956913 4865 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956917 4865 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956921 4865 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956925 4865 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956928 4865 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956932 4865 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956936 4865 
feature_gate.go:330] unrecognized feature gate: Example Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956940 4865 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956944 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956948 4865 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956952 4865 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956956 4865 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956960 4865 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956965 4865 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956971 4865 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956983 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956988 4865 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956992 4865 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.956996 4865 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957000 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957004 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957009 4865 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
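The kubenswrapper warnings above report that several kubelet command-line flags (--container-runtime-endpoint, --minimum-container-ttl-duration, --volume-plugin-dir, --register-with-taints, --pod-infra-container-image, --system-reserved) are deprecated and, where a replacement exists, should be moved into the file passed to --config (shown later in this log as /etc/kubernetes/kubelet.conf). What follows is a minimal Python sketch, not part of this log, for listing the deprecated flags and their remediation hints from journal text shaped like these entries; the SAMPLE string and function name are illustrative assumptions.

import re

# Illustrative sample, shaped like the kubenswrapper deprecation warnings above.
SAMPLE = (
    "Jan 23 11:52:35 crc kubenswrapper[4865]: Flag --register-with-taints has been "
    "deprecated, This parameter should be set via the config file specified by the "
    "Kubelet's --config flag. "
    "Jan 23 11:52:35 crc kubenswrapper[4865]: Flag --minimum-container-ttl-duration has "
    "been deprecated, Use --eviction-hard or --eviction-soft instead."
)

# One match per "Flag --<name> has been deprecated, <hint>" warning; the hint is
# captured up to the end of its first sentence.
DEPRECATED = re.compile(r"Flag (--[\w-]+) has been deprecated, ([^.]*\.)")

def deprecated_flags(journal_text: str) -> dict[str, str]:
    """Map each deprecated kubelet flag to the remediation hint in its warning."""
    return {name: hint.strip() for name, hint in DEPRECATED.findall(journal_text)}

if __name__ == "__main__":
    for flag, hint in sorted(deprecated_flags(SAMPLE).items()):
        print(f"{flag}: {hint}")
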
Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957014 4865 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957019 4865 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957023 4865 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957027 4865 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957030 4865 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957035 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957039 4865 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957042 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957046 4865 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957049 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957053 4865 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957057 4865 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957062 4865 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957066 4865 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957070 4865 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957074 4865 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957078 4865 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957081 4865 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957085 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957088 4865 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957091 4865 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957095 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957099 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957103 4865 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957106 4865 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 11:52:35 crc 
kubenswrapper[4865]: W0123 11:52:35.957110 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957113 4865 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957117 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957120 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957129 4865 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957132 4865 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957136 4865 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957139 4865 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957145 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.957149 4865 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957243 4865 flags.go:64] FLAG: --address="0.0.0.0" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957255 4865 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957267 4865 flags.go:64] FLAG: --anonymous-auth="true" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957273 4865 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957280 4865 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957284 4865 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957290 4865 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957296 4865 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957301 4865 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957305 4865 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957309 4865 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957314 4865 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957319 4865 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957324 4865 flags.go:64] FLAG: --cgroup-root="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957328 4865 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957332 4865 flags.go:64] FLAG: --client-ca-file="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957337 4865 flags.go:64] FLAG: --cloud-config="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957341 4865 flags.go:64] FLAG: --cloud-provider="" Jan 23 11:52:35 crc 
kubenswrapper[4865]: I0123 11:52:35.957345 4865 flags.go:64] FLAG: --cluster-dns="[]" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957354 4865 flags.go:64] FLAG: --cluster-domain="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957358 4865 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957362 4865 flags.go:64] FLAG: --config-dir="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957366 4865 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957371 4865 flags.go:64] FLAG: --container-log-max-files="5" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957377 4865 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957381 4865 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957385 4865 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957389 4865 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957394 4865 flags.go:64] FLAG: --contention-profiling="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957398 4865 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957409 4865 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957413 4865 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957417 4865 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957423 4865 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957427 4865 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957431 4865 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957436 4865 flags.go:64] FLAG: --enable-load-reader="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957439 4865 flags.go:64] FLAG: --enable-server="true" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957444 4865 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957449 4865 flags.go:64] FLAG: --event-burst="100" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957453 4865 flags.go:64] FLAG: --event-qps="50" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957457 4865 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957462 4865 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957466 4865 flags.go:64] FLAG: --eviction-hard="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957471 4865 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957476 4865 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957480 4865 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957485 4865 flags.go:64] FLAG: --eviction-soft="" Jan 23 
11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957489 4865 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957493 4865 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957497 4865 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957501 4865 flags.go:64] FLAG: --experimental-mounter-path="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957505 4865 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957509 4865 flags.go:64] FLAG: --fail-swap-on="true" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957513 4865 flags.go:64] FLAG: --feature-gates="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957518 4865 flags.go:64] FLAG: --file-check-frequency="20s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957522 4865 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957527 4865 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957531 4865 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957535 4865 flags.go:64] FLAG: --healthz-port="10248" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957539 4865 flags.go:64] FLAG: --help="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957543 4865 flags.go:64] FLAG: --hostname-override="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957548 4865 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957553 4865 flags.go:64] FLAG: --http-check-frequency="20s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957557 4865 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957563 4865 flags.go:64] FLAG: --image-credential-provider-config="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957568 4865 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957572 4865 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957576 4865 flags.go:64] FLAG: --image-service-endpoint="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957581 4865 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957585 4865 flags.go:64] FLAG: --kube-api-burst="100" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957589 4865 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957593 4865 flags.go:64] FLAG: --kube-api-qps="50" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957612 4865 flags.go:64] FLAG: --kube-reserved="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957617 4865 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957621 4865 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957625 4865 flags.go:64] FLAG: --kubelet-cgroups="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957629 4865 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 23 11:52:35 crc 
kubenswrapper[4865]: I0123 11:52:35.957633 4865 flags.go:64] FLAG: --lock-file="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957637 4865 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957641 4865 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957646 4865 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957652 4865 flags.go:64] FLAG: --log-json-split-stream="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957656 4865 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957660 4865 flags.go:64] FLAG: --log-text-split-stream="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957665 4865 flags.go:64] FLAG: --logging-format="text" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957669 4865 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957673 4865 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957677 4865 flags.go:64] FLAG: --manifest-url="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957681 4865 flags.go:64] FLAG: --manifest-url-header="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957687 4865 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957691 4865 flags.go:64] FLAG: --max-open-files="1000000" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957696 4865 flags.go:64] FLAG: --max-pods="110" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957700 4865 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957705 4865 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957709 4865 flags.go:64] FLAG: --memory-manager-policy="None" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957714 4865 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957719 4865 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957724 4865 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957729 4865 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957740 4865 flags.go:64] FLAG: --node-status-max-images="50" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957744 4865 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957750 4865 flags.go:64] FLAG: --oom-score-adj="-999" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957754 4865 flags.go:64] FLAG: --pod-cidr="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957758 4865 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957765 4865 flags.go:64] FLAG: --pod-manifest-path="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957769 4865 flags.go:64] FLAG: 
--pod-max-pids="-1" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957774 4865 flags.go:64] FLAG: --pods-per-core="0" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957778 4865 flags.go:64] FLAG: --port="10250" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957782 4865 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957787 4865 flags.go:64] FLAG: --provider-id="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957791 4865 flags.go:64] FLAG: --qos-reserved="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957795 4865 flags.go:64] FLAG: --read-only-port="10255" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957799 4865 flags.go:64] FLAG: --register-node="true" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957803 4865 flags.go:64] FLAG: --register-schedulable="true" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957807 4865 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957815 4865 flags.go:64] FLAG: --registry-burst="10" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957819 4865 flags.go:64] FLAG: --registry-qps="5" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957824 4865 flags.go:64] FLAG: --reserved-cpus="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957828 4865 flags.go:64] FLAG: --reserved-memory="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957835 4865 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957840 4865 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957844 4865 flags.go:64] FLAG: --rotate-certificates="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957849 4865 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957854 4865 flags.go:64] FLAG: --runonce="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957858 4865 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957866 4865 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957870 4865 flags.go:64] FLAG: --seccomp-default="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957875 4865 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957879 4865 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957884 4865 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957888 4865 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957893 4865 flags.go:64] FLAG: --storage-driver-password="root" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957897 4865 flags.go:64] FLAG: --storage-driver-secure="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957901 4865 flags.go:64] FLAG: --storage-driver-table="stats" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957905 4865 flags.go:64] FLAG: --storage-driver-user="root" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957909 4865 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 23 11:52:35 crc 
kubenswrapper[4865]: I0123 11:52:35.957913 4865 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957918 4865 flags.go:64] FLAG: --system-cgroups="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957922 4865 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957929 4865 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957933 4865 flags.go:64] FLAG: --tls-cert-file="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957937 4865 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957942 4865 flags.go:64] FLAG: --tls-min-version="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957946 4865 flags.go:64] FLAG: --tls-private-key-file="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957950 4865 flags.go:64] FLAG: --topology-manager-policy="none" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957954 4865 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957958 4865 flags.go:64] FLAG: --topology-manager-scope="container" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957962 4865 flags.go:64] FLAG: --v="2" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957968 4865 flags.go:64] FLAG: --version="false" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957975 4865 flags.go:64] FLAG: --vmodule="" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957980 4865 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.957985 4865 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958107 4865 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958112 4865 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958116 4865 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958121 4865 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958125 4865 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958129 4865 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958132 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958136 4865 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958141 4865 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958145 4865 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958149 4865 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958153 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 
11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958159 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958163 4865 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958167 4865 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958170 4865 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958174 4865 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958178 4865 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958181 4865 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958185 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958188 4865 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958197 4865 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958200 4865 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958204 4865 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958207 4865 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958211 4865 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958215 4865 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958219 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958222 4865 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958226 4865 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958229 4865 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958233 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958236 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958240 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958244 4865 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958248 4865 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958252 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958255 4865 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958259 4865 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958262 4865 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958265 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958270 4865 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958274 4865 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958278 4865 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958283 4865 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958286 4865 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958290 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958294 4865 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958298 4865 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958304 4865 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958309 4865 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958312 4865 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958317 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958322 4865 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958328 4865 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
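The flags.go:64 entries above dump every kubelet command-line flag with its effective value (for example --config="/etc/kubernetes/kubelet.conf", --node-ip="192.168.126.11", --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"). Below is a minimal Python sketch, assuming the quoted FLAG: --name="value" format shown in those entries, that collects them into a dictionary for inspection; the SAMPLE excerpt and helper name are illustrative, not taken from this log.

import re

# A short excerpt shaped like the flags.go:64 dump above (values are double-quoted).
SAMPLE = (
    'I0123 11:52:35.957362 4865 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" '
    'I0123 11:52:35.957724 4865 flags.go:64] FLAG: --node-ip="192.168.126.11" '
    'I0123 11:52:35.957807 4865 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"'
)

# Non-greedy value match so packed lines with several FLAG entries parse correctly.
FLAG = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

def parse_flag_dump(text: str) -> dict[str, str]:
    """Collect every FLAG: --name="value" entry into a name -> value mapping."""
    return {name: value for name, value in FLAG.findall(text)}

if __name__ == "__main__":
    flags = parse_flag_dump(SAMPLE)
    print(flags["--config"])    # /etc/kubernetes/kubelet.conf
    print(flags["--node-ip"])   # 192.168.126.11
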
Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958333 4865 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958337 4865 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958351 4865 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958356 4865 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958360 4865 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958363 4865 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958367 4865 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958371 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958375 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958378 4865 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958382 4865 feature_gate.go:330] unrecognized feature gate: Example Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958387 4865 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958393 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958398 4865 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958404 4865 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.958409 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.958614 4865 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.970707 4865 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.970764 4865 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.970909 4865 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.970930 4865 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.970940 4865 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.970951 4865 feature_gate.go:330] unrecognized feature gate: 
InsightsRuntimeExtractor Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.970960 4865 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.970968 4865 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.970976 4865 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.970984 4865 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.970993 4865 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971001 4865 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971012 4865 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971023 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971032 4865 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971041 4865 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971048 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971057 4865 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971064 4865 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971073 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971081 4865 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971090 4865 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971098 4865 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971106 4865 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971114 4865 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971121 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971130 4865 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971138 4865 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971148 4865 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971156 4865 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971164 4865 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 11:52:35 crc 
kubenswrapper[4865]: W0123 11:52:35.971172 4865 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971180 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971188 4865 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971197 4865 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971204 4865 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971212 4865 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971220 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971228 4865 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971236 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971244 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971254 4865 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971268 4865 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971277 4865 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971287 4865 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971296 4865 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971305 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971314 4865 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971322 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971330 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971338 4865 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971345 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971355 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971363 4865 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971370 4865 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971378 4865 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971389 4865 
feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971399 4865 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971409 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971417 4865 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971426 4865 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971438 4865 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971449 4865 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971458 4865 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971466 4865 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971475 4865 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971483 4865 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971491 4865 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971499 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971536 4865 feature_gate.go:330] unrecognized feature gate: Example Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971545 4865 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971553 4865 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971562 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.971576 4865 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971842 4865 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971859 4865 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971869 4865 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971878 4865 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971886 4865 feature_gate.go:330] unrecognized 
feature gate: MultiArchInstallAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971893 4865 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971902 4865 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971910 4865 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971918 4865 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971926 4865 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971934 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971942 4865 feature_gate.go:330] unrecognized feature gate: Example Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971950 4865 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971957 4865 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971969 4865 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971980 4865 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971989 4865 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.971998 4865 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972007 4865 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972016 4865 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972024 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972033 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972040 4865 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972078 4865 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972090 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972101 4865 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972111 4865 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972120 4865 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972129 4865 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972138 4865 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972146 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972155 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972163 4865 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972171 4865 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972180 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972188 4865 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972196 4865 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972206 4865 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972214 4865 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972222 4865 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972231 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972238 4865 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972246 4865 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972254 4865 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972264 4865 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972274 4865 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972282 4865 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972291 4865 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972299 4865 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972310 4865 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
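The feature_gate.go:386 entries above record the feature-gate map the kubelet actually applies (KMSv1, CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, and ValidatingAdmissionPolicy enabled; the rest disabled), while the feature_gate.go:330 warnings list gate names the kubelet's own feature-gate registry does not recognize (it logs a warning for each and continues). A minimal Python sketch, assuming the Go map[...] dump format shown in those summary lines, that turns one of them into a Python dictionary; SAMPLE and the function name are illustrative assumptions.

import re

# Shaped like the feature_gate.go:386 summary lines above (abbreviated gate list).
SAMPLE = (
    "I0123 11:52:35.972497 4865 feature_gate.go:386] feature gates: "
    "{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false ValidatingAdmissionPolicy:true]}"
)

GATES = re.compile(r"feature gates: \{map\[(.*?)\]\}")

def parse_feature_gates(text: str) -> dict[str, bool]:
    """Turn the Go map dump 'Name:true Name:false ...' into a Python dict."""
    m = GATES.search(text)
    if not m:
        return {}
    gates = {}
    for pair in m.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"
    return gates

if __name__ == "__main__":
    enabled = [g for g, on in parse_feature_gates(SAMPLE).items() if on]
    print(sorted(enabled))  # ['CloudDualStackNodeIPs', 'KMSv1', 'ValidatingAdmissionPolicy']
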
Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972321 4865 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972330 4865 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972341 4865 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972350 4865 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972358 4865 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972367 4865 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972375 4865 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972383 4865 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972392 4865 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972400 4865 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972407 4865 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972415 4865 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972423 4865 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972431 4865 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972438 4865 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972447 4865 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972455 4865 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972463 4865 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972470 4865 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972478 4865 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 11:52:35 crc kubenswrapper[4865]: W0123 11:52:35.972486 4865 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.972497 4865 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.973099 4865 server.go:940] "Client 
rotation is on, will bootstrap in background" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.977458 4865 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.977637 4865 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.978452 4865 server.go:997] "Starting client certificate rotation" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.978487 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.978942 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-09 20:19:52.590874992 +0000 UTC Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.979079 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.987623 4865 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 11:52:35 crc kubenswrapper[4865]: E0123 11:52:35.990251 4865 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.990402 4865 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 11:52:35 crc kubenswrapper[4865]: I0123 11:52:35.997934 4865 log.go:25] "Validated CRI v1 runtime API" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.013914 4865 log.go:25] "Validated CRI v1 image API" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.015430 4865 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.019641 4865 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-11-47-15-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.019689 4865 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.034336 4865 manager.go:217] Machine: {Timestamp:2026-01-23 11:52:36.033310766 +0000 UTC m=+0.202383012 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199464448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} 
HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:bb0a19dc-3efc-4874-8a0b-6a80f91a629b BootID:fc8e73b9-5731-4055-8f0b-defdec7b14e0 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039894528 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599734272 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076106 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599730176 Type:vfs Inodes:3076106 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:0e:41:e1 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:0e:41:e1 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:e4:d6:9a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d6:f7:73 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1a:87:e0 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:8f:dc:60 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:02:ad:87:60:09:1c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ba:26:f7:07:9b:03 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199464448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.034524 4865 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.034760 4865 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.035904 4865 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.036229 4865 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.036279 4865 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.036651 4865 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.036670 4865 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.037020 4865 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.037075 4865 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.037327 4865 state_mem.go:36] "Initialized new in-memory state store" Jan 23 11:52:36 crc 
kubenswrapper[4865]: I0123 11:52:36.037468 4865 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.038360 4865 kubelet.go:418] "Attempting to sync node with API server" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.038392 4865 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.038434 4865 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.038466 4865 kubelet.go:324] "Adding apiserver pod source" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.038485 4865 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 11:52:36 crc kubenswrapper[4865]: W0123 11:52:36.040483 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.040643 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.040640 4865 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 23 11:52:36 crc kubenswrapper[4865]: W0123 11:52:36.040715 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.040817 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.041317 4865 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
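The repeated "dial tcp 38.102.83.80:6443: connect: connection refused" failures above (the certificate signing request, then the Node and Service reflectors) all reference the same endpoint, https://api-int.crc.testing:6443, which is not yet accepting connections at this point in kubelet startup. Below is a minimal reachability probe for that endpoint; the host name and port are taken from the log, while the script itself is purely illustrative and not part of the captured output.

    #!/usr/bin/env python3
    # Illustrative probe of the API endpoint named in the "connection refused" errors.
    # Host and port come from the log above; nothing else here is from the capture.
    import socket

    HOST, PORT = "api-int.crc.testing", 6443

    try:
        with socket.create_connection((HOST, PORT), timeout=3):
            print(f"{HOST}:{PORT} is accepting TCP connections")
    except OSError as exc:
        # A refused connection here reproduces what the kubelet is reporting.
        print(f"{HOST}:{PORT} unreachable: {exc}")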
Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.042986 4865 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.044172 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.044217 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.044234 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.044249 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.044273 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.044287 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.044307 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.044330 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.044347 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.044361 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.044422 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.044448 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.045000 4865 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.045933 4865 server.go:1280] "Started kubelet" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.046451 4865 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.047326 4865 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.047757 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d59f955b0e97e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 11:52:36.045859198 +0000 UTC m=+0.214931464,LastTimestamp:2026-01-23 11:52:36.045859198 +0000 UTC m=+0.214931464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 11:52:36 crc systemd[1]: Started Kubernetes Kubelet. 
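Each kubenswrapper entry in this capture carries a klog-style prefix (severity letter I/W/E, MMDD date, wall-clock time, PID, source file:line) inside the journal's own "Jan 23 ... crc kubenswrapper[4865]:" prefix. A small triage script along the following lines can count entries per severity and surface the noisiest warning/error sources; the regular expression is only an approximation of the layout seen in this log, and the script and its name are illustrative rather than part of the capture.

    #!/usr/bin/env python3
    # triage.py (illustrative): tally kubenswrapper klog entries by severity and
    # list the source locations producing the most warnings/errors.
    import re
    import sys
    from collections import Counter

    # Approximates the prefix layout observed in this log; not an official parser.
    KLOG = re.compile(
        r"kubenswrapper\[\d+\]: "
        r"(?P<sev>[IWEF])\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ "
        r"(?P<src>[\w./-]+:\d+)\]"
    )

    severities, sources = Counter(), Counter()
    for line in sys.stdin:
        # A single journal line in this capture can hold several klog entries.
        for m in KLOG.finditer(line):
            severities[m.group("sev")] += 1
            if m.group("sev") in ("W", "E"):
                sources[m.group("src")] += 1

    print("entries by severity:", dict(severities))
    for src, n in sources.most_common(5):
        print(f"{n:5d}  {src}")

Fed with something like journalctl -u kubelet on the node (unit name assumed), this would report the feature_gate.go warnings and the reflector/certificate connection errors seen above.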
Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.048642 4865 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.050889 4865 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.054065 4865 server.go:460] "Adding debug handlers to kubelet server" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.056810 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.056860 4865 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.057050 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 11:24:22.006618797 +0000 UTC Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.057291 4865 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.057325 4865 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.057472 4865 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.057651 4865 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.057990 4865 factory.go:55] Registering systemd factory Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.058062 4865 factory.go:221] Registration of the systemd container factory successfully Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.058720 4865 factory.go:153] Registering CRI-O factory Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.058754 4865 factory.go:221] Registration of the crio container factory successfully Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.058865 4865 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.060938 4865 factory.go:103] Registering Raw factory Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.060970 4865 manager.go:1196] Started watching for new ooms in manager Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.059746 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Jan 23 11:52:36 crc kubenswrapper[4865]: W0123 11:52:36.059316 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.061713 4865 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.065159 4865 manager.go:319] Starting recovery of all containers Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067149 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067217 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067233 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067245 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067257 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067268 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067278 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067290 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067304 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067315 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067326 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067338 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067364 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067379 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067391 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067408 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067420 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067431 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067444 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067456 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067470 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067484 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067497 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067524 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067537 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067549 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067566 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067579 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067900 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067914 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067926 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067938 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067951 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067963 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067975 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067986 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.067997 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068009 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068020 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068031 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068042 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068053 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068065 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068076 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068103 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068116 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068128 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068140 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068152 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068163 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068173 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068185 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068203 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068216 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068228 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068243 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068254 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068264 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068276 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068288 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068299 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068317 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068328 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068339 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068349 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068360 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068371 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068382 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068394 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068405 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068416 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068427 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068438 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068447 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068458 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068468 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068480 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068490 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068501 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068515 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068527 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068539 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068550 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068562 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068574 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068586 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068637 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068652 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068663 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068674 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068686 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068699 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068712 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068725 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068738 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068752 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068764 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068777 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068791 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068804 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068817 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068831 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068844 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068856 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068876 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068890 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068903 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.068916 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070473 4865 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070668 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070716 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070761 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070787 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070816 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070839 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070864 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070895 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070917 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070939 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070966 4865 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.070990 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071014 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071035 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071059 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071081 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071102 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071122 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071161 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071211 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071237 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071259 4865 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071281 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071303 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071324 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071350 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071377 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071400 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071424 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071447 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071468 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071489 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071515 4865 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071537 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071561 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071581 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071634 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071658 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071681 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071704 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071724 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071745 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071766 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071789 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071814 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071835 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071857 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071878 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071899 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071921 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071945 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071965 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.071985 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072007 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072028 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072048 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072068 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072089 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072113 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072133 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072153 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072175 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072195 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072219 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072241 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072261 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072285 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072306 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072325 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072346 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072367 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072387 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072407 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072428 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072447 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072469 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072494 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072521 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072549 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072577 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072642 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072676 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072709 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072730 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072754 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072774 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072831 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072867 4865 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072898 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072941 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.072981 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.073002 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.073024 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.073044 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.073067 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.073098 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.073126 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.073156 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.073189 4865 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.073216 4865 reconstruct.go:97] "Volume reconstruction finished" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.073236 4865 reconciler.go:26] "Reconciler: start to sync state" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.106943 4865 manager.go:324] Recovery completed Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.114085 4865 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.116561 4865 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.116806 4865 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.116845 4865 kubelet.go:2335] "Starting kubelet main sync loop" Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.116895 4865 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 11:52:36 crc kubenswrapper[4865]: W0123 11:52:36.117882 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.117962 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.119405 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.120867 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.120906 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.120918 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.121973 4865 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.121988 4865 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.122010 4865 state_mem.go:36] "Initialized new in-memory state store" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.132013 4865 policy_none.go:49] "None policy: Start" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.132651 4865 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.132687 4865 state_mem.go:35] "Initializing new in-memory state store" Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.158513 4865 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.187756 4865 manager.go:334] "Starting Device Plugin manager" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.187829 4865 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.187845 4865 server.go:79] "Starting device plugin registration server" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.188454 4865 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.188474 4865 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.188648 4865 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.188753 4865 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.188799 4865 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.202799 4865 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.217198 4865 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.217301 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.218187 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.218217 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.218226 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.218343 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.218493 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.218534 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.219176 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.219218 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.219230 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.219365 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.219472 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.219500 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.219999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.220078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.220096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.220290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.220326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.220340 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.220544 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.220567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.220577 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.220688 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.220824 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.220885 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.221489 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.221507 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.221515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.221595 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.221761 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.221820 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.223048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.223098 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.223109 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.223057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.223141 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.223155 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.223332 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.223359 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.223369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.223369 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.223513 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.224355 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.224403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.224414 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.261980 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276074 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276166 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276202 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276223 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276238 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276254 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276354 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276422 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276466 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276489 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276553 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276646 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276672 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276726 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.276784 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.289317 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.290336 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.290370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.290379 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.290403 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.290908 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.377829 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.377873 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.377892 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.377905 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.377919 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.377935 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.377950 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.377964 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.377982 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.377998 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378013 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378027 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378040 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378055 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378069 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378443 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378496 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378515 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378532 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378550 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378586 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378636 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378657 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378674 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378699 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378720 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378744 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378767 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378785 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.378805 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.491460 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.493108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.493150 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.493165 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.493194 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.493679 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.542199 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.546324 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.561502 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: W0123 11:52:36.577077 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-23a336e336cc42d48ff239b45b3c82c0a6be52dfb61b9879adf369428179e31b WatchSource:0}: Error finding container 23a336e336cc42d48ff239b45b3c82c0a6be52dfb61b9879adf369428179e31b: Status 404 returned error can't find the container with id 23a336e336cc42d48ff239b45b3c82c0a6be52dfb61b9879adf369428179e31b Jan 23 11:52:36 crc kubenswrapper[4865]: W0123 11:52:36.578760 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-9e1b2b227f11ba0568d4e910d2e389a1782208d385d5d251ac53f22b4b774ed8 WatchSource:0}: Error finding container 9e1b2b227f11ba0568d4e910d2e389a1782208d385d5d251ac53f22b4b774ed8: Status 404 returned error can't find the container with id 9e1b2b227f11ba0568d4e910d2e389a1782208d385d5d251ac53f22b4b774ed8 Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.578867 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: W0123 11:52:36.581151 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d99a79b7318c409da3ac7b32e4ef23956ad8f03d9dfda57710172484a945d080 WatchSource:0}: Error finding container d99a79b7318c409da3ac7b32e4ef23956ad8f03d9dfda57710172484a945d080: Status 404 returned error can't find the container with id d99a79b7318c409da3ac7b32e4ef23956ad8f03d9dfda57710172484a945d080 Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.585235 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:36 crc kubenswrapper[4865]: W0123 11:52:36.592498 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-0bbc2a280f2d43f677c7951da8c4b3a202a776d57ca75860119784fb82722223 WatchSource:0}: Error finding container 0bbc2a280f2d43f677c7951da8c4b3a202a776d57ca75860119784fb82722223: Status 404 returned error can't find the container with id 0bbc2a280f2d43f677c7951da8c4b3a202a776d57ca75860119784fb82722223 Jan 23 11:52:36 crc kubenswrapper[4865]: W0123 11:52:36.602379 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-95d941767983db282248a423f8358e7a0f93a6045335eafcb445207a71ecf594 WatchSource:0}: Error finding container 95d941767983db282248a423f8358e7a0f93a6045335eafcb445207a71ecf594: Status 404 returned error can't find the container with id 95d941767983db282248a423f8358e7a0f93a6045335eafcb445207a71ecf594 Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.662835 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.893733 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.896209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.896543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.896559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:36 crc kubenswrapper[4865]: I0123 11:52:36.896586 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.896958 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Jan 23 11:52:36 crc kubenswrapper[4865]: W0123 11:52:36.901182 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 23 11:52:36 crc kubenswrapper[4865]: E0123 11:52:36.901241 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.056096 4865 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.058142 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:48:57.025073188 +0000 UTC Jan 23 11:52:37 crc kubenswrapper[4865]: W0123 11:52:37.087132 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 23 11:52:37 crc kubenswrapper[4865]: E0123 11:52:37.087221 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.122834 4865 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="47083bda0596965bdcbc763f43edb7d50e682c0717cb25757e247e49eea59a04" exitCode=0 Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.122928 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"47083bda0596965bdcbc763f43edb7d50e682c0717cb25757e247e49eea59a04"} Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.123024 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0bbc2a280f2d43f677c7951da8c4b3a202a776d57ca75860119784fb82722223"} Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.123114 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.124171 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.124197 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.124206 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.124402 4865 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="13f5fe979055d878de5615c1bdac06f3dbb4a14a5ad02ccd769e0acce65d28f9" exitCode=0 Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.124453 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"13f5fe979055d878de5615c1bdac06f3dbb4a14a5ad02ccd769e0acce65d28f9"} Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.124474 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d99a79b7318c409da3ac7b32e4ef23956ad8f03d9dfda57710172484a945d080"} Jan 
23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.124567 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.126228 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.126372 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.126385 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.127468 4865 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494" exitCode=0 Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.127494 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494"} Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.127549 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"23a336e336cc42d48ff239b45b3c82c0a6be52dfb61b9879adf369428179e31b"} Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.127696 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.129069 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.129124 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.129149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.130502 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4"} Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.130535 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9e1b2b227f11ba0568d4e910d2e389a1782208d385d5d251ac53f22b4b774ed8"} Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.133184 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6" exitCode=0 Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.133227 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6"} Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.133251 
4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"95d941767983db282248a423f8358e7a0f93a6045335eafcb445207a71ecf594"} Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.133345 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.136681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.136762 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.136775 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.142009 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.143513 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.143595 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.143653 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:37 crc kubenswrapper[4865]: W0123 11:52:37.293014 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 23 11:52:37 crc kubenswrapper[4865]: E0123 11:52:37.293114 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 23 11:52:37 crc kubenswrapper[4865]: E0123 11:52:37.464203 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Jan 23 11:52:37 crc kubenswrapper[4865]: W0123 11:52:37.468952 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Jan 23 11:52:37 crc kubenswrapper[4865]: E0123 11:52:37.469007 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.697979 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 
23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.700514 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.700555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.700566 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:37 crc kubenswrapper[4865]: I0123 11:52:37.700609 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 11:52:37 crc kubenswrapper[4865]: E0123 11:52:37.706183 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.058371 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:47:36.713391571 +0000 UTC Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.091915 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.137843 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2"} Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.137892 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3"} Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.137903 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e"} Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.137913 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c"} Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.139305 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e39ed654e93a5320123992387d8137d3cd051da4282d72952e463873dbe9b144"} Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.139392 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.140136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.140161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.140170 4865 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.141660 4865 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b72cdbeb350645b31d20c88b564042f00703f31595af579dc3496d4213af357a" exitCode=0 Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.141728 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b72cdbeb350645b31d20c88b564042f00703f31595af579dc3496d4213af357a"} Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.141816 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.142640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.142658 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.142666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.155053 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7e704eefb278d79695b1251a512db41c42cb4fe7f2b1a8a1a14ce8fea9b46b94"} Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.155112 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"13b15729a5822849eeeb33338a7049e7899e43c958eb3ee6acb5fbe4f4bab8ca"} Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.155124 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1"} Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.155219 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.159942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.159989 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.160002 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.161640 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41"} Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.161683 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28"} Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.161696 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21"} Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.161711 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.167099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.167138 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:38 crc kubenswrapper[4865]: I0123 11:52:38.167151 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.058908 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 16:12:17.408551418 +0000 UTC Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.170250 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2"} Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.170365 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.171500 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.171554 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.171571 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.174761 4865 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="81ae1738684a8eabc5c38f86032bcbcd58f36290d431877e861e611bcf0a1116" exitCode=0 Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.174861 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"81ae1738684a8eabc5c38f86032bcbcd58f36290d431877e861e611bcf0a1116"} Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.175198 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.175435 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.176308 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.176364 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.176389 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.177170 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.177216 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.177235 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.191500 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.306343 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.308156 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.308223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.308246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:39 crc kubenswrapper[4865]: I0123 11:52:39.308298 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.059075 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 06:09:32.313700111 +0000 UTC Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.183000 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"32d22e7eb1f69f58afd7cc8366caece7385ce349a9f6838dac8255ecbca769f1"} Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.183053 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6826d9e6f25e2df12839d21bb8ad3506f2f7fe5a358cf47f3a0aa61b1533da82"} Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.183069 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4b340619fcf055ece3f70923424892ce7912d47d3a328cbf387b747415f01511"} Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.183080 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.183132 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.184347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.184379 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.184390 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.439730 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.439898 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.442374 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.442423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.442434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:40 crc kubenswrapper[4865]: I0123 11:52:40.446724 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.059714 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 11:52:03.302047563 +0000 UTC Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.106000 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.197919 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9414c4b34eff1c122a2bbc7d142b009f5a0bd6237032f73084beba1625113d2b"} Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.198000 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"52542b62e37525fa85644a95293f6ced96343128fc98b18da7853ace1ddb8881"} Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.198123 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.198161 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.198781 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.199032 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.199837 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.199889 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.199890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.199955 4865 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.199979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.199915 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.201254 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.201309 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.201326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.612670 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.957086 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.957729 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.959808 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.959992 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:41 crc kubenswrapper[4865]: I0123 11:52:41.960125 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.060830 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:15:13.596917524 +0000 UTC Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.200670 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.200744 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.200780 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.202473 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.202718 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.202899 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.202533 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.203231 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:42 
crc kubenswrapper[4865]: I0123 11:52:42.203283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.345909 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.346122 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.346194 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.348145 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.348412 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.348660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:42 crc kubenswrapper[4865]: I0123 11:52:42.709137 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.062585 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 17:00:35.683778518 +0000 UTC Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.203494 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.203583 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.203584 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.205290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.205333 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.205351 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.205308 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.205431 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.205450 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.233299 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.233496 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.235299 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.235394 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.235423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:43 crc kubenswrapper[4865]: I0123 11:52:43.461766 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:44 crc kubenswrapper[4865]: I0123 11:52:44.063799 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:34:54.099658821 +0000 UTC Jan 23 11:52:44 crc kubenswrapper[4865]: I0123 11:52:44.206372 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:44 crc kubenswrapper[4865]: I0123 11:52:44.207781 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:44 crc kubenswrapper[4865]: I0123 11:52:44.207825 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:44 crc kubenswrapper[4865]: I0123 11:52:44.207843 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:44 crc kubenswrapper[4865]: I0123 11:52:44.983089 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 23 11:52:44 crc kubenswrapper[4865]: I0123 11:52:44.983427 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:44 crc kubenswrapper[4865]: I0123 11:52:44.984999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:44 crc kubenswrapper[4865]: I0123 11:52:44.985083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:44 crc kubenswrapper[4865]: I0123 11:52:44.985114 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:45 crc kubenswrapper[4865]: I0123 11:52:45.065905 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:22:01.678643894 +0000 UTC Jan 23 11:52:45 crc kubenswrapper[4865]: I0123 11:52:45.709911 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 11:52:45 crc kubenswrapper[4865]: I0123 11:52:45.710089 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 11:52:46 crc kubenswrapper[4865]: I0123 11:52:46.066546 4865 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:20:50.028053221 +0000 UTC Jan 23 11:52:46 crc kubenswrapper[4865]: E0123 11:52:46.203764 4865 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 11:52:47 crc kubenswrapper[4865]: I0123 11:52:47.067453 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:48:26.287819031 +0000 UTC Jan 23 11:52:48 crc kubenswrapper[4865]: I0123 11:52:48.055923 4865 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 23 11:52:48 crc kubenswrapper[4865]: I0123 11:52:48.068508 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:04:30.492939409 +0000 UTC Jan 23 11:52:48 crc kubenswrapper[4865]: E0123 11:52:48.093397 4865 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 11:52:49 crc kubenswrapper[4865]: E0123 11:52:49.066550 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 23 11:52:49 crc kubenswrapper[4865]: I0123 11:52:49.069889 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:02:54.748694002 +0000 UTC Jan 23 11:52:49 crc kubenswrapper[4865]: W0123 11:52:49.163917 4865 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 23 11:52:49 crc kubenswrapper[4865]: I0123 11:52:49.164046 4865 trace.go:236] Trace[1457381375]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 11:52:39.162) (total time: 10001ms): Jan 23 11:52:49 crc kubenswrapper[4865]: Trace[1457381375]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:52:49.163) Jan 23 11:52:49 crc kubenswrapper[4865]: Trace[1457381375]: [10.001929948s] [10.001929948s] END Jan 23 11:52:49 crc kubenswrapper[4865]: E0123 11:52:49.164072 4865 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 23 11:52:49 crc kubenswrapper[4865]: I0123 11:52:49.191942 4865 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 11:52:49 crc kubenswrapper[4865]: I0123 11:52:49.192057 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 11:52:49 crc kubenswrapper[4865]: I0123 11:52:49.324913 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 11:52:49 crc kubenswrapper[4865]: I0123 11:52:49.324977 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 11:52:50 crc kubenswrapper[4865]: I0123 11:52:50.070656 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:21:49.289334931 +0000 UTC Jan 23 11:52:51 crc kubenswrapper[4865]: I0123 11:52:51.070986 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:18:31.685787396 +0000 UTC Jan 23 11:52:51 crc kubenswrapper[4865]: I0123 11:52:51.646024 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 23 11:52:51 crc kubenswrapper[4865]: I0123 11:52:51.646264 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:51 crc kubenswrapper[4865]: I0123 11:52:51.647948 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:51 crc kubenswrapper[4865]: I0123 11:52:51.648150 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:51 crc kubenswrapper[4865]: I0123 11:52:51.648315 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:51 crc kubenswrapper[4865]: I0123 11:52:51.663280 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 23 11:52:52 crc kubenswrapper[4865]: I0123 11:52:52.071272 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 21:07:27.950677103 +0000 UTC Jan 23 11:52:52 crc kubenswrapper[4865]: I0123 11:52:52.227689 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:52 crc kubenswrapper[4865]: I0123 11:52:52.229182 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:52 crc kubenswrapper[4865]: 
I0123 11:52:52.229282 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:52 crc kubenswrapper[4865]: I0123 11:52:52.229364 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:52 crc kubenswrapper[4865]: I0123 11:52:52.260173 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 11:52:52 crc kubenswrapper[4865]: I0123 11:52:52.278638 4865 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 11:52:52 crc kubenswrapper[4865]: I0123 11:52:52.787522 4865 csr.go:261] certificate signing request csr-sflbx is approved, waiting to be issued Jan 23 11:52:52 crc kubenswrapper[4865]: I0123 11:52:52.840256 4865 csr.go:257] certificate signing request csr-sflbx is issued Jan 23 11:52:53 crc kubenswrapper[4865]: I0123 11:52:53.072315 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 08:12:32.337220062 +0000 UTC Jan 23 11:52:53 crc kubenswrapper[4865]: I0123 11:52:53.468915 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:53 crc kubenswrapper[4865]: I0123 11:52:53.469473 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:53 crc kubenswrapper[4865]: I0123 11:52:53.470918 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:53 crc kubenswrapper[4865]: I0123 11:52:53.471062 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:53 crc kubenswrapper[4865]: I0123 11:52:53.471172 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:53 crc kubenswrapper[4865]: I0123 11:52:53.842200 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 11:47:52 +0000 UTC, rotation deadline is 2026-11-16 14:01:46.130339936 +0000 UTC Jan 23 11:52:53 crc kubenswrapper[4865]: I0123 11:52:53.842274 4865 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7130h8m52.288070083s for next certificate rotation Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.072846 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 13:12:35.849962065 +0000 UTC Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.202081 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.202345 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.203830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.203862 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.203874 4865 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.206791 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.233147 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.233210 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.234168 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.234202 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.234210 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.241393 4865 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.299539 4865 trace.go:236] Trace[1532006836]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 11:52:39.358) (total time: 14941ms): Jan 23 11:52:54 crc kubenswrapper[4865]: Trace[1532006836]: ---"Objects listed" error: 14941ms (11:52:54.299) Jan 23 11:52:54 crc kubenswrapper[4865]: Trace[1532006836]: [14.941241282s] [14.941241282s] END Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.299577 4865 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.299905 4865 trace.go:236] Trace[552031745]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 11:52:39.973) (total time: 14326ms): Jan 23 11:52:54 crc kubenswrapper[4865]: Trace[552031745]: ---"Objects listed" error: 14326ms (11:52:54.299) Jan 23 11:52:54 crc kubenswrapper[4865]: Trace[552031745]: [14.32611976s] [14.32611976s] END Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.299962 4865 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 11:52:54 crc kubenswrapper[4865]: E0123 11:52:54.305015 4865 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.305101 4865 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.306485 4865 trace.go:236] Trace[73316950]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 11:52:40.072) (total time: 14234ms): Jan 23 11:52:54 crc kubenswrapper[4865]: Trace[73316950]: ---"Objects listed" error: 14233ms (11:52:54.306) Jan 23 11:52:54 crc kubenswrapper[4865]: Trace[73316950]: [14.234117487s] [14.234117487s] END Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.306718 4865 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.366067 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.380318 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.380385 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.381967 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:54 crc kubenswrapper[4865]: I0123 11:52:54.389002 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.050491 4865 apiserver.go:52] "Watching apiserver" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.053884 4865 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.054326 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-l5tpj","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.054733 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.054811 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.055114 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.055107 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.055145 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.055239 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.055411 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.055415 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.055730 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.055845 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-l5tpj" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.058954 4865 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.061948 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.062019 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.062212 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.062343 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.062456 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.062554 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.061947 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.062854 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.063011 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.063094 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.063258 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"kube-root-ca.crt" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.065653 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.073101 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 11:50:17.296786979 +0000 UTC Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.082488 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.096194 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.108238 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.110523 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.110678 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.110781 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.110878 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.110978 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111042 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111058 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111066 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111180 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111207 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111227 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111245 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111264 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111282 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111304 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111325 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111344 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111381 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111396 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111412 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111431 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111451 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111467 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111499 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111516 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111533 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111579 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111618 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111645 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111662 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111711 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111726 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111741 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111764 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111800 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 
11:52:55.111818 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111833 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111853 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111873 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111893 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111910 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111929 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111948 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111965 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111983 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.111999 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112017 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112033 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112051 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112067 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112085 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112102 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112119 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112184 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112203 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112225 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112243 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112254 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112261 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112347 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112473 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112512 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112539 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112567 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112613 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112640 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112666 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112692 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112717 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112744 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112772 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112797 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112822 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112821 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112844 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112848 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.112849 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113000 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113010 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113036 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113074 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113104 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113135 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113161 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113195 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113237 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113265 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113289 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113318 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113342 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113366 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113397 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113421 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113448 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113474 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113500 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113528 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113568 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113615 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113642 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113680 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113703 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113730 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113764 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114383 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114417 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114435 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114453 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114470 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114489 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114505 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114525 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114543 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114562 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114583 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114624 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114644 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114662 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114680 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114699 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114719 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114737 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114755 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114771 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114788 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114809 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114827 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114844 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " 
Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114860 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114876 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114922 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114938 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114954 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114971 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114988 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115009 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115030 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115072 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115093 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115109 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115129 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115147 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115164 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115181 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115204 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115221 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115237 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115253 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115271 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115288 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115346 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115364 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115384 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115401 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115420 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115437 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115454 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115473 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115490 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115508 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115523 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115539 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115555 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115572 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115588 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115625 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115644 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115664 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115681 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115696 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115712 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115730 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115747 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115765 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115785 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115801 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115816 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115834 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115851 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115867 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115887 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115905 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115925 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115943 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115962 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115979 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115996 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116013 4865 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116028 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116045 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116061 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116077 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116093 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116111 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116144 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116163 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116181 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116198 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116216 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116234 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116252 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116272 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116317 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116341 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116386 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116408 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116433 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-fzj6j\" (UniqueName: \"kubernetes.io/projected/eedf452a-daa6-4d9c-94ed-ca47edac4448-kube-api-access-fzj6j\") pod \"node-resolver-l5tpj\" (UID: \"eedf452a-daa6-4d9c-94ed-ca47edac4448\") " pod="openshift-dns/node-resolver-l5tpj" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116455 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116475 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116496 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116514 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116532 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116551 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116568 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116587 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116621 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116643 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116660 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/eedf452a-daa6-4d9c-94ed-ca47edac4448-hosts-file\") pod \"node-resolver-l5tpj\" (UID: \"eedf452a-daa6-4d9c-94ed-ca47edac4448\") " pod="openshift-dns/node-resolver-l5tpj" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116751 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116766 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116778 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116788 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116809 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116823 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116834 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113530 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: 
"09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113667 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113749 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.113943 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114095 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114159 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114167 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114332 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114360 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114499 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114557 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114583 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114685 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.114966 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115098 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115292 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115361 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115811 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.121902 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.115892 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116395 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116553 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.116989 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.117301 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.117420 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.117833 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.118497 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.118781 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.118750 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.118945 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.118961 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.118980 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.119062 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.119186 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.119203 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.119379 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.119429 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.118875 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). 
InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.119452 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.119671 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.119804 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.119955 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.120031 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.120080 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.120994 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.121015 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.121333 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.121623 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.122041 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.122191 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.122323 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.122520 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.122815 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.122821 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.123366 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.123411 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.123843 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.124021 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.124099 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.124409 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.124222 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.124554 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.124784 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.125172 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.125322 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.125355 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.125372 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.125579 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.126068 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.126115 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.126429 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.126918 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.127069 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.127070 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.127638 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.128034 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:52:55.627550906 +0000 UTC m=+19.796623322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.128546 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.128729 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.128867 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.129245 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.129770 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.129784 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.130257 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.130276 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.130428 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.130768 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.130929 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.130964 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.130983 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.131015 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.131057 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.131178 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.131407 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.131442 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). 
InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.131623 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.131647 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.132356 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.132626 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.132720 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.133415 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.132750 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.133396 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.133641 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.133737 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.133958 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.134113 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.134239 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.134276 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.134815 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.134914 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.134876 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.135376 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.135508 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.135687 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.136298 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.136781 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.136984 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.139584 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.137239 4865 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.139888 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.140145 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.140562 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.137028 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.137185 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.137384 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.138896 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.138982 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.139006 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.142468 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.139020 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.141102 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.141759 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.149544 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.149966 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.149991 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.149995 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.150019 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.150672 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.150675 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.150719 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.150872 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.150957 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.151182 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.139164 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.151400 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.151510 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:52:55.651465463 +0000 UTC m=+19.820537689 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.139097 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.152241 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.152709 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.153168 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.153199 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.153304 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.153955 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.153942 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.154450 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.155246 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.155526 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.156128 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.156557 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.156714 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.157034 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.157246 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.157152 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.157649 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.157666 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.157847 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.157914 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.158044 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.158163 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.158562 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.158771 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.158842 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.159480 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.159833 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.160863 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.161385 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.161414 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.162107 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.162407 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.162494 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.162643 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.162949 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.163010 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.163081 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.163108 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.163851 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.163923 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.164197 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.164223 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.164251 4865 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.164284 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 11:52:55.664143605 +0000 UTC m=+19.833215831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.164708 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 11:52:55.664654607 +0000 UTC m=+19.833726833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.167223 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.165897 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:52:55.665864425 +0000 UTC m=+19.834936651 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.167804 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.168119 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.173287 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.173511 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.174947 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.175110 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.175408 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.176838 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.177113 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.177401 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.179970 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.180076 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.181999 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.194488 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.194811 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.197510 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.205998 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.210906 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.217505 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.218151 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.218187 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.218373 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/eedf452a-daa6-4d9c-94ed-ca47edac4448-hosts-file\") pod \"node-resolver-l5tpj\" (UID: \"eedf452a-daa6-4d9c-94ed-ca47edac4448\") " pod="openshift-dns/node-resolver-l5tpj" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.218467 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.218543 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzj6j\" (UniqueName: \"kubernetes.io/projected/eedf452a-daa6-4d9c-94ed-ca47edac4448-kube-api-access-fzj6j\") pod \"node-resolver-l5tpj\" (UID: \"eedf452a-daa6-4d9c-94ed-ca47edac4448\") " pod="openshift-dns/node-resolver-l5tpj" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.218483 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.218677 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/eedf452a-daa6-4d9c-94ed-ca47edac4448-hosts-file\") pod \"node-resolver-l5tpj\" (UID: \"eedf452a-daa6-4d9c-94ed-ca47edac4448\") " pod="openshift-dns/node-resolver-l5tpj" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.218831 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.219178 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.219259 4865 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.219332 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.219389 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.219442 4865 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.219522 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.219580 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.219926 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.220007 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.220061 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.220113 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.220172 4865 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.220229 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.220287 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 23 
11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.220342 4865 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.220765 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.220830 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221409 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221432 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221445 4865 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221457 4865 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221495 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221508 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221521 4865 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221533 4865 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221568 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221583 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node 
\"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221634 4865 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221650 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221664 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221677 4865 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221717 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221739 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221752 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221793 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221835 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221848 4865 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221887 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221904 4865 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221917 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 
11:52:55.221931 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221968 4865 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221982 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.221996 4865 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222010 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222024 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222037 4865 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222050 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222064 4865 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222077 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222089 4865 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222101 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222113 4865 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" 
Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222125 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222143 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222157 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222171 4865 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222184 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222199 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222213 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222226 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222237 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222247 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222257 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222269 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222279 4865 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc 
kubenswrapper[4865]: I0123 11:52:55.222884 4865 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222905 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222919 4865 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222934 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222977 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.222991 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223004 4865 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223020 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223056 4865 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223069 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223081 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223093 4865 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223105 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 
23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223117 4865 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223130 4865 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223142 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223151 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223162 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223172 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223182 4865 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223192 4865 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223201 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223212 4865 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223221 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223231 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223241 4865 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223251 4865 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223262 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223321 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223332 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223342 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223352 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223364 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223374 4865 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223384 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223395 4865 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223405 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223418 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223428 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc 
kubenswrapper[4865]: I0123 11:52:55.223437 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223473 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223484 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223493 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223504 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223516 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223526 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223560 4865 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223642 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223481 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223658 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223840 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223863 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223877 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223921 4865 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223961 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223976 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.223988 4865 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224056 4865 reconciler_common.go:293] "Volume detached for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224074 4865 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224089 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224101 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224117 4865 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224129 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224141 4865 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224153 4865 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224167 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224180 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224193 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224207 4865 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224220 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224234 4865 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224249 4865 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224265 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224277 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224290 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224303 4865 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224316 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224328 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224343 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224357 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224369 4865 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224382 4865 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224395 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224407 4865 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224420 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224432 4865 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224445 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224456 4865 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224468 4865 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224480 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224501 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224514 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224526 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224537 4865 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224549 4865 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224561 4865 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224573 4865 reconciler_common.go:293] "Volume detached for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224584 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224615 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224628 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224647 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224660 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224671 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224688 4865 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224701 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224714 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224725 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224737 4865 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224750 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224762 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224790 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224801 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224813 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224825 4865 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224836 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224851 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224863 4865 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224875 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224887 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224901 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224913 4865 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.224925 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.236631 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzj6j\" (UniqueName: 
\"kubernetes.io/projected/eedf452a-daa6-4d9c-94ed-ca47edac4448-kube-api-access-fzj6j\") pod \"node-resolver-l5tpj\" (UID: \"eedf452a-daa6-4d9c-94ed-ca47edac4448\") " pod="openshift-dns/node-resolver-l5tpj" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.237724 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.240406 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2" exitCode=255 Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.240471 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2"} Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.247038 4865 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.251497 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.252415 4865 scope.go:117] "RemoveContainer" containerID="93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.253722 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.266636 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.278649 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.289295 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.301324 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.313008 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.326250 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.336831 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.371033 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.379376 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.385726 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.391693 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-l5tpj" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.630619 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.631269 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:52:56.631197021 +0000 UTC m=+20.800269247 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.731936 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.731996 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.732025 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.732058 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.732201 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.732208 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.732260 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.732275 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.732229 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.732345 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.732352 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 11:52:56.73232863 +0000 UTC m=+20.901400846 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.732362 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.732429 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 11:52:56.732399561 +0000 UTC m=+20.901471957 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.732469 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.732483 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:52:56.732457173 +0000 UTC m=+20.901529389 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: E0123 11:52:55.732507 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:52:56.732495783 +0000 UTC m=+20.901568009 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.857388 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-sgp5m"] Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.857936 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.860621 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.862409 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.864089 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.864337 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.864569 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.876660 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.891394 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.899978 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.902024 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.911010 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.921823 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.934637 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.935009 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfmcw\" (UniqueName: \"kubernetes.io/projected/1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b-kube-api-access-bfmcw\") pod \"machine-config-daemon-sgp5m\" (UID: \"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.935064 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b-proxy-tls\") pod \"machine-config-daemon-sgp5m\" (UID: \"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.935099 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b-rootfs\") pod \"machine-config-daemon-sgp5m\" (UID: \"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.935141 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b-mcd-auth-proxy-config\") pod \"machine-config-daemon-sgp5m\" (UID: \"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.952621 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.976652 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 11:52:55 crc kubenswrapper[4865]: I0123 11:52:55.979768 4865 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980090 4865 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980141 4865 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980189 4865 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980197 4865 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980220 4865 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980214 4865 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980279 4865 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980250 4865 reflector.go:484] 
object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980258 4865 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980281 4865 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980312 4865 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980348 4865 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980454 4865 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980462 4865 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980542 4865 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980066 4865 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:55 crc kubenswrapper[4865]: W0123 11:52:55.980891 4865 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.035811 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b-mcd-auth-proxy-config\") pod \"machine-config-daemon-sgp5m\" (UID: \"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.035922 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfmcw\" (UniqueName: \"kubernetes.io/projected/1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b-kube-api-access-bfmcw\") pod \"machine-config-daemon-sgp5m\" (UID: \"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.035953 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b-proxy-tls\") pod \"machine-config-daemon-sgp5m\" (UID: \"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.035991 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b-rootfs\") pod \"machine-config-daemon-sgp5m\" (UID: \"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.036075 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b-rootfs\") pod \"machine-config-daemon-sgp5m\" (UID: \"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.068691 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b-mcd-auth-proxy-config\") pod \"machine-config-daemon-sgp5m\" (UID: \"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.073019 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b-proxy-tls\") pod \"machine-config-daemon-sgp5m\" (UID: \"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.073584 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfmcw\" (UniqueName: \"kubernetes.io/projected/1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b-kube-api-access-bfmcw\") pod \"machine-config-daemon-sgp5m\" (UID: \"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\") " pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.074103 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 18:46:58.249312946 +0000 UTC Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.121477 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.122145 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.123241 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.124158 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.124778 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.125327 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.125997 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.126591 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.127249 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.127788 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.128318 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.128988 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.129478 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.132315 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.132935 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.134068 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.134666 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.136349 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.137040 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.137652 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.138629 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.139287 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.139756 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.140779 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.141202 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.142301 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.143064 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.143945 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.144634 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" 
path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.145723 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.146244 4865 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.146368 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.148384 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.148931 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.149335 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.150916 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.151885 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.152423 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.153474 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.154140 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.154991 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.155578 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.156579 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.157184 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.158017 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.158564 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.159526 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.160269 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.161227 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.161749 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.162652 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.163180 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.163941 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.164818 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.171412 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.245976 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.249489 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c"} Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.249676 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.250714 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"1666931dbbe14db985c5434594c2db286a86534e9e1573370340114ee1e59211"} Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.256042 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l5tpj" event={"ID":"eedf452a-daa6-4d9c-94ed-ca47edac4448","Type":"ContainerStarted","Data":"e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b"} Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.256090 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l5tpj" event={"ID":"eedf452a-daa6-4d9c-94ed-ca47edac4448","Type":"ContainerStarted","Data":"84f5addbdb755f722bd572560140fcfeaa5a49f2aec6d1b34f804fe048ffd566"} Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.261518 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659"} Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.261578 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755"} Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.261622 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b10058a5eede85f36832bc7bcce2607c418bbe0590c095b375e68a48128fc938"} Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.267720 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3"} Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.267781 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2f4808291653d7683bb6621cd5cefffc4d79b18434a3d5b65270fbd63b967f29"} Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.269268 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9f498f4451d48ac15b63657988146e56d529d40a4140ce1b080e21375dc94468"} Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.282664 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-cb8rs"] Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.283090 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.283944 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-qwf88"] Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.284448 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.287541 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.287912 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.288395 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.288629 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.288764 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.297932 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.302148 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441266 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-system-cni-dir\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441320 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-os-release\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441340 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441418 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-run-k8s-cni-cncf-io\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441458 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-var-lib-cni-multus\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441479 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b3d06336-44ac-4c17-899b-28cbfe2ee64d-cni-binary-copy\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441499 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-cni-binary-copy\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441517 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bn5g\" (UniqueName: \"kubernetes.io/projected/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-kube-api-access-7bn5g\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441535 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-run-netns\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441554 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9brx\" (UniqueName: \"kubernetes.io/projected/b3d06336-44ac-4c17-899b-28cbfe2ee64d-kube-api-access-w9brx\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441579 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-cnibin\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441608 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-multus-cni-dir\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " 
pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441626 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-multus-conf-dir\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441644 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441665 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-cnibin\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441679 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-os-release\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441698 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-var-lib-cni-bin\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441714 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b3d06336-44ac-4c17-899b-28cbfe2ee64d-multus-daemon-config\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441738 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-var-lib-kubelet\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441752 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-hostroot\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441785 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-etc-kubernetes\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 
11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441802 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-run-multus-certs\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441820 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-system-cni-dir\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.441839 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-multus-socket-dir-parent\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.545719 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.545795 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-cnibin\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.545818 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-os-release\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.545840 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-var-lib-cni-bin\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.545861 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b3d06336-44ac-4c17-899b-28cbfe2ee64d-multus-daemon-config\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.545891 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-var-lib-kubelet\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.545906 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-hostroot\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.545922 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-etc-kubernetes\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.546087 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-cnibin\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.546185 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-hostroot\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.546197 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-var-lib-kubelet\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.546210 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-var-lib-cni-bin\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.546398 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-etc-kubernetes\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.546553 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-os-release\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.546746 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.546896 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b3d06336-44ac-4c17-899b-28cbfe2ee64d-multus-daemon-config\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 
11:52:56.546976 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-run-multus-certs\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.547001 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-system-cni-dir\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.547050 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-run-multus-certs\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.547094 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-multus-socket-dir-parent\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.547284 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-system-cni-dir\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.547322 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-system-cni-dir\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.547393 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-system-cni-dir\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.547450 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-multus-socket-dir-parent\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.547356 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-os-release\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.547502 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.547572 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-os-release\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.547864 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-run-k8s-cni-cncf-io\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.548340 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.547744 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-run-k8s-cni-cncf-io\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.548432 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-var-lib-cni-multus\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.548458 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b3d06336-44ac-4c17-899b-28cbfe2ee64d-cni-binary-copy\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.548507 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-var-lib-cni-multus\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.548550 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-cni-binary-copy\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.549098 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-cni-binary-copy\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.549224 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b3d06336-44ac-4c17-899b-28cbfe2ee64d-cni-binary-copy\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.549273 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bn5g\" (UniqueName: \"kubernetes.io/projected/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-kube-api-access-7bn5g\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.549320 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-run-netns\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.549342 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9brx\" (UniqueName: \"kubernetes.io/projected/b3d06336-44ac-4c17-899b-28cbfe2ee64d-kube-api-access-w9brx\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.549397 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-host-run-netns\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.549442 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-cnibin\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.549558 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-multus-cni-dir\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.549509 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-cnibin\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.549669 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-multus-conf-dir\") pod \"multus-cb8rs\" (UID: 
\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.549759 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-multus-cni-dir\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.549780 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b3d06336-44ac-4c17-899b-28cbfe2ee64d-multus-conf-dir\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.579305 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bn5g\" (UniqueName: \"kubernetes.io/projected/7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2-kube-api-access-7bn5g\") pod \"multus-additional-cni-plugins-qwf88\" (UID: \"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\") " pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.580190 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9brx\" (UniqueName: \"kubernetes.io/projected/b3d06336-44ac-4c17-899b-28cbfe2ee64d-kube-api-access-w9brx\") pod \"multus-cb8rs\" (UID: \"b3d06336-44ac-4c17-899b-28cbfe2ee64d\") " pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.600930 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cb8rs" Jan 23 11:52:56 crc kubenswrapper[4865]: W0123 11:52:56.613737 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3d06336_44ac_4c17_899b_28cbfe2ee64d.slice/crio-2469c1b89b3e69d06112b9437573ac8b0f1aea1818ff9a4a1bec493ed104dd9a WatchSource:0}: Error finding container 2469c1b89b3e69d06112b9437573ac8b0f1aea1818ff9a4a1bec493ed104dd9a: Status 404 returned error can't find the container with id 2469c1b89b3e69d06112b9437573ac8b0f1aea1818ff9a4a1bec493ed104dd9a Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.615201 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qwf88" Jan 23 11:52:56 crc kubenswrapper[4865]: W0123 11:52:56.628435 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b5ee7d8_e4b9_4df1_96b9_a922e6d801e2.slice/crio-52fa566586065869234c7dc1ac281c111af73fd3f91bd195e30bb7c5ee04259a WatchSource:0}: Error finding container 52fa566586065869234c7dc1ac281c111af73fd3f91bd195e30bb7c5ee04259a: Status 404 returned error can't find the container with id 52fa566586065869234c7dc1ac281c111af73fd3f91bd195e30bb7c5ee04259a Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.650254 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.650467 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:52:58.650424105 +0000 UTC m=+22.819496331 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.700031 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68shs"] Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.700976 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.706099 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.706191 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.706224 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.706324 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.706480 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.706489 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.706944 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.751041 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.751096 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.751121 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.751141 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.751264 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.751271 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.751299 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.751311 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.751356 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.751359 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.751394 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.751407 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.751314 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:52:58.751295267 +0000 UTC m=+22.920367493 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.751494 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 11:52:58.751473501 +0000 UTC m=+22.920545727 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.751512 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:52:58.751503352 +0000 UTC m=+22.920575578 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:52:56 crc kubenswrapper[4865]: E0123 11:52:56.751524 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 11:52:58.751519312 +0000 UTC m=+22.920591538 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852040 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ea3549b-3898-4d82-8240-2e062b4a6046-ovn-node-metrics-cert\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852162 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-run-netns\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852189 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-ovn\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852245 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-node-log\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852266 
4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-log-socket\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852314 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852343 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-etc-openvswitch\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852384 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-systemd\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852444 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-kubelet\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852499 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-var-lib-openvswitch\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852529 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-ovnkube-script-lib\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852557 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl88l\" (UniqueName: \"kubernetes.io/projected/4ea3549b-3898-4d82-8240-2e062b4a6046-kube-api-access-wl88l\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852579 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-env-overrides\") pod \"ovnkube-node-68shs\" (UID: 
\"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852630 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-openvswitch\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852654 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-slash\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852683 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-run-ovn-kubernetes\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852706 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-ovnkube-config\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852737 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-systemd-units\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852777 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-cni-bin\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.852846 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-cni-netd\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954275 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-slash\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954332 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-run-ovn-kubernetes\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954353 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-ovnkube-config\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954373 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-systemd-units\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954392 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-cni-bin\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954418 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-cni-netd\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954440 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ea3549b-3898-4d82-8240-2e062b4a6046-ovn-node-metrics-cert\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954457 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-log-socket\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954472 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-run-netns\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954489 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-ovn\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954503 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-node-log\") pod \"ovnkube-node-68shs\" (UID: 
\"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954519 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-etc-openvswitch\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954512 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-run-ovn-kubernetes\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954581 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954536 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954644 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-systemd\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954687 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-kubelet\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954707 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-var-lib-openvswitch\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954728 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-ovnkube-script-lib\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954748 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl88l\" (UniqueName: \"kubernetes.io/projected/4ea3549b-3898-4d82-8240-2e062b4a6046-kube-api-access-wl88l\") 
pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954783 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-env-overrides\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954812 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-openvswitch\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954885 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-openvswitch\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954914 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-slash\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954937 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-systemd\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954958 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-kubelet\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.954979 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-var-lib-openvswitch\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.955321 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-run-netns\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.955401 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-node-log\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: 
I0123 11:52:56.955464 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-ovn\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.955532 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-etc-openvswitch\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.955557 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-systemd-units\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.955578 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-cni-netd\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.955614 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-ovnkube-config\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.955731 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-ovnkube-script-lib\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.956221 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-env-overrides\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.956279 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-cni-bin\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.956561 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-log-socket\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.959984 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/4ea3549b-3898-4d82-8240-2e062b4a6046-ovn-node-metrics-cert\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.974464 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl88l\" (UniqueName: \"kubernetes.io/projected/4ea3549b-3898-4d82-8240-2e062b4a6046-kube-api-access-wl88l\") pod \"ovnkube-node-68shs\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:56 crc kubenswrapper[4865]: I0123 11:52:56.996575 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.015862 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.020283 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.021181 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: W0123 11:52:57.029373 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ea3549b_3898_4d82_8240_2e062b4a6046.slice/crio-876b7cec3fb24d1cf86c2c30e77fe6e0213e9433560b3b5f0239229f73236091 WatchSource:0}: Error finding container 876b7cec3fb24d1cf86c2c30e77fe6e0213e9433560b3b5f0239229f73236091: Status 404 returned error can't find the container with id 876b7cec3fb24d1cf86c2c30e77fe6e0213e9433560b3b5f0239229f73236091 Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.039406 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.054634 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.057797 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.074851 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 21:11:58.400741738 +0000 UTC Jan 23 11:52:57 crc kubenswrapper[4865]: 
I0123 11:52:57.076455 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":
false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.087155 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.088382 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.092716 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.106977 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.117351 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:52:57 crc kubenswrapper[4865]: E0123 11:52:57.117746 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.117659 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:52:57 crc kubenswrapper[4865]: E0123 11:52:57.118208 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.117627 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:52:57 crc kubenswrapper[4865]: E0123 11:52:57.118372 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.120299 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.129109 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.145051 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.171137 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.182879 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.198316 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.210539 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.211732 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.221226 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.236267 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.236536 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.238498 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.250755 4865 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.265132 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.273687 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cb8rs" event={"ID":"b3d06336-44ac-4c17-899b-28cbfe2ee64d","Type":"ContainerStarted","Data":"6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.273752 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-cb8rs" event={"ID":"b3d06336-44ac-4c17-899b-28cbfe2ee64d","Type":"ContainerStarted","Data":"2469c1b89b3e69d06112b9437573ac8b0f1aea1818ff9a4a1bec493ed104dd9a"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.275085 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.276242 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.276297 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.277396 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"876b7cec3fb24d1cf86c2c30e77fe6e0213e9433560b3b5f0239229f73236091"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.279221 4865 generic.go:334] "Generic (PLEG): container finished" podID="7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2" containerID="594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32" exitCode=0 Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.279313 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" event={"ID":"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2","Type":"ContainerDied","Data":"594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.279377 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" event={"ID":"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2","Type":"ContainerStarted","Data":"52fa566586065869234c7dc1ac281c111af73fd3f91bd195e30bb7c5ee04259a"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.279875 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.292378 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.313472 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.331151 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.335636 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.349449 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.361643 4865 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.364802 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.365318 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.368034 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.393399 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.403359 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.415052 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-
01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.429085 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.444517 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1
ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.445748 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.458973 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.478074 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.503015 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.505445 4865 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.507993 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.508098 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.508160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.508344 4865 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.516830 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.530917 4865 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.531460 4865 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.533046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.533083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.533095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.533113 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.533124 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:57Z","lastTransitionTime":"2026-01-23T11:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.533936 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.535679 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.550586 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: E0123 11:52:57.560124 4865 kubelet_node_status.go:585] "Error 
updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0
878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"size
Bytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365}
,{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.563995 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.564127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.564211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.564282 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.564352 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:57Z","lastTransitionTime":"2026-01-23T11:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.578962 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: E0123 11:52:57.585420 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.589764 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.589822 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.589839 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.589859 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.589872 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:57Z","lastTransitionTime":"2026-01-23T11:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.601346 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: E0123 11:52:57.605979 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.609420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.609446 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.609455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.609472 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.609483 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:57Z","lastTransitionTime":"2026-01-23T11:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.620535 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: E0123 11:52:57.631587 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.643469 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.645844 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.645882 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.645893 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.645912 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.645923 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:57Z","lastTransitionTime":"2026-01-23T11:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:57 crc kubenswrapper[4865]: E0123 11:52:57.660986 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: E0123 11:52:57.661109 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.661399 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.663992 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.664016 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.664026 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.664044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.664055 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:57Z","lastTransitionTime":"2026-01-23T11:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.677743 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.708990 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.745979 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.769753 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc 
kubenswrapper[4865]: I0123 11:52:57.770122 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.770143 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.770156 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.770175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.770191 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:57Z","lastTransitionTime":"2026-01-23T11:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.803069 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:57Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.873434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.873475 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.873487 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.873502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.873514 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:57Z","lastTransitionTime":"2026-01-23T11:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.976470 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.976506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.976514 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.976533 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:57 crc kubenswrapper[4865]: I0123 11:52:57.976548 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:57Z","lastTransitionTime":"2026-01-23T11:52:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.076848 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:58:47.113191331 +0000 UTC Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.079031 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.079058 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.079068 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.079082 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.079091 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:58Z","lastTransitionTime":"2026-01-23T11:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.181878 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.181932 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.181945 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.181967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.181983 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:58Z","lastTransitionTime":"2026-01-23T11:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.232854 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wrntt"] Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.233440 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wrntt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.235892 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.236166 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.236304 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.238269 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.249999 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.264365 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.279966 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.284695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.284737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.284751 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.284774 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.284788 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:58Z","lastTransitionTime":"2026-01-23T11:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.286401 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934"} Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.288281 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006" exitCode=0 Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.288376 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006"} Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.291912 4865 generic.go:334] "Generic (PLEG): container finished" podID="7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2" containerID="5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c" exitCode=0 Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.292136 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" event={"ID":"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2","Type":"ContainerDied","Data":"5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c"} Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.314693 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.338645 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.353128 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.371006 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0c4178d-fd12-43de-a232-00b6b7ed5866-host\") pod \"node-ca-wrntt\" (UID: \"e0c4178d-fd12-43de-a232-00b6b7ed5866\") " pod="openshift-image-registry/node-ca-wrntt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.371114 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e0c4178d-fd12-43de-a232-00b6b7ed5866-serviceca\") pod \"node-ca-wrntt\" (UID: \"e0c4178d-fd12-43de-a232-00b6b7ed5866\") " pod="openshift-image-registry/node-ca-wrntt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.371208 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nkjv\" (UniqueName: \"kubernetes.io/projected/e0c4178d-fd12-43de-a232-00b6b7ed5866-kube-api-access-7nkjv\") pod \"node-ca-wrntt\" (UID: \"e0c4178d-fd12-43de-a232-00b6b7ed5866\") " pod="openshift-image-registry/node-ca-wrntt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.378789 4865 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\
\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.392719 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.392763 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.392773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.392791 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.392804 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:58Z","lastTransitionTime":"2026-01-23T11:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.395957 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.413863 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.427851 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.454322 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.469946 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.471671 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e0c4178d-fd12-43de-a232-00b6b7ed5866-serviceca\") pod \"node-ca-wrntt\" (UID: \"e0c4178d-fd12-43de-a232-00b6b7ed5866\") " pod="openshift-image-registry/node-ca-wrntt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.471724 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7nkjv\" (UniqueName: \"kubernetes.io/projected/e0c4178d-fd12-43de-a232-00b6b7ed5866-kube-api-access-7nkjv\") pod \"node-ca-wrntt\" (UID: \"e0c4178d-fd12-43de-a232-00b6b7ed5866\") " pod="openshift-image-registry/node-ca-wrntt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.471751 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0c4178d-fd12-43de-a232-00b6b7ed5866-host\") pod \"node-ca-wrntt\" (UID: \"e0c4178d-fd12-43de-a232-00b6b7ed5866\") " pod="openshift-image-registry/node-ca-wrntt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.471825 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0c4178d-fd12-43de-a232-00b6b7ed5866-host\") pod \"node-ca-wrntt\" (UID: \"e0c4178d-fd12-43de-a232-00b6b7ed5866\") " pod="openshift-image-registry/node-ca-wrntt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.472823 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e0c4178d-fd12-43de-a232-00b6b7ed5866-serviceca\") pod \"node-ca-wrntt\" (UID: \"e0c4178d-fd12-43de-a232-00b6b7ed5866\") " pod="openshift-image-registry/node-ca-wrntt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.495247 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.495650 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.495669 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 
11:52:58.495678 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.495694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.495709 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:58Z","lastTransitionTime":"2026-01-23T11:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.500113 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nkjv\" (UniqueName: \"kubernetes.io/projected/e0c4178d-fd12-43de-a232-00b6b7ed5866-kube-api-access-7nkjv\") pod \"node-ca-wrntt\" (UID: \"e0c4178d-fd12-43de-a232-00b6b7ed5866\") " pod="openshift-image-registry/node-ca-wrntt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.512921 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.528098 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.546532 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-wrntt" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.554959 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.576135 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.590107 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.601332 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.601368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:58 
crc kubenswrapper[4865]: I0123 11:52:58.601377 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.601394 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.601405 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:58Z","lastTransitionTime":"2026-01-23T11:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.612439 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z 
is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.623503 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.638275 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.651261 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.665518 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.673881 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.674008 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:53:02.673969657 +0000 UTC m=+26.843041883 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.682946 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.696812 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.705552 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.705640 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.705681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.705703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.705716 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:58Z","lastTransitionTime":"2026-01-23T11:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.709242 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.727178 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.741859 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:58Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.774723 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.774810 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.774837 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.775000 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 
11:52:58.775046 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.775088 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.775084 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.775192 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:02.775177568 +0000 UTC m=+26.944249784 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.775206 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.775029 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.775276 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.775293 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.775294 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:02.7752714 +0000 UTC m=+26.944343626 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.775424 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:02.775381953 +0000 UTC m=+26.944454179 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.775530 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:52:58 crc kubenswrapper[4865]: E0123 11:52:58.775627 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:02.775564627 +0000 UTC m=+26.944636853 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.808075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.808116 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.808128 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.808149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.808159 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:58Z","lastTransitionTime":"2026-01-23T11:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.910824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.911263 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.911273 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.911290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:58 crc kubenswrapper[4865]: I0123 11:52:58.911317 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:58Z","lastTransitionTime":"2026-01-23T11:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.014027 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.014066 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.014074 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.014115 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.014126 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:59Z","lastTransitionTime":"2026-01-23T11:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.097752 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 05:50:55.532285726 +0000 UTC Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.117003 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:52:59 crc kubenswrapper[4865]: E0123 11:52:59.117126 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.117278 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:52:59 crc kubenswrapper[4865]: E0123 11:52:59.117473 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.117516 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:52:59 crc kubenswrapper[4865]: E0123 11:52:59.117566 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.117675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.117693 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.117701 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.117714 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.117726 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:59Z","lastTransitionTime":"2026-01-23T11:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.222731 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.222770 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.222801 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.222842 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.222856 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:59Z","lastTransitionTime":"2026-01-23T11:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.300084 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wrntt" event={"ID":"e0c4178d-fd12-43de-a232-00b6b7ed5866","Type":"ContainerStarted","Data":"39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.300140 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wrntt" event={"ID":"e0c4178d-fd12-43de-a232-00b6b7ed5866","Type":"ContainerStarted","Data":"75c05107b94322a727ced2bff53837c39d8e7207300a827f980be200a32a5007"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.308568 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.308630 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.308643 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.308654 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.308665 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.308692 4865 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.312511 4865 generic.go:334] "Generic (PLEG): container finished" podID="7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2" containerID="55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816" exitCode=0 Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.312634 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" event={"ID":"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2","Type":"ContainerDied","Data":"55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.321528 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc 
kubenswrapper[4865]: I0123 11:52:59.326834 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.326859 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.326867 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.326881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.326894 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:59Z","lastTransitionTime":"2026-01-23T11:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.344265 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.367443 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.398555 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.417061 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.431641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.431681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.431694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.431716 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.431731 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:59Z","lastTransitionTime":"2026-01-23T11:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.436179 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.457260 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.477001 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.496770 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.513636 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.528328 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.534179 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.534226 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.534238 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.534261 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.534277 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:59Z","lastTransitionTime":"2026-01-23T11:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.542130 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.555414 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.568366 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.581804 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.593470 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.604734 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.621337 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.636910 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.636974 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.636985 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.637005 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.637019 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:59Z","lastTransitionTime":"2026-01-23T11:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.655653 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.676058 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatu
ses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.694022 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.714921 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.731036 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.739899 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.739954 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.739966 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.739984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.740001 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:59Z","lastTransitionTime":"2026-01-23T11:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.748680 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.764035 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.778457 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.800263 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.817594 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:52:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.844004 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.844080 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.844128 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.844158 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.844177 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:59Z","lastTransitionTime":"2026-01-23T11:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.947641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.947716 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.947735 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.947765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:52:59 crc kubenswrapper[4865]: I0123 11:52:59.947785 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:52:59Z","lastTransitionTime":"2026-01-23T11:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.052730 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.052798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.052817 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.052847 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.052868 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:00Z","lastTransitionTime":"2026-01-23T11:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.099119 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 19:11:58.108545077 +0000 UTC Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.155915 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.156219 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.156306 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.156405 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.156498 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:00Z","lastTransitionTime":"2026-01-23T11:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.259466 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.259720 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.259783 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.259880 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.259981 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:00Z","lastTransitionTime":"2026-01-23T11:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.318609 4865 generic.go:334] "Generic (PLEG): container finished" podID="7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2" containerID="e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757" exitCode=0 Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.318656 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" event={"ID":"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2","Type":"ContainerDied","Data":"e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757"} Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.362766 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.362816 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.362829 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.362850 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.362863 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:00Z","lastTransitionTime":"2026-01-23T11:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.363676 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.385757 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.402084 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.415933 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.430557 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.445012 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.471704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.471898 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.471914 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.472048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.472063 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:00Z","lastTransitionTime":"2026-01-23T11:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.473675 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.488449 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.500819 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.513320 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.529801 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.545825 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.568236 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.576551 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.576626 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.576637 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.576660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.576672 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:00Z","lastTransitionTime":"2026-01-23T11:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.582669 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:00Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.679436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.679503 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.679520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.679550 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.679574 4865 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:00Z","lastTransitionTime":"2026-01-23T11:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.783546 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.783688 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.783708 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.784145 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.784409 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:00Z","lastTransitionTime":"2026-01-23T11:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.887886 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.887961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.887973 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.887996 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.888009 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:00Z","lastTransitionTime":"2026-01-23T11:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.991513 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.991586 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.991643 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.991673 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:00 crc kubenswrapper[4865]: I0123 11:53:00.991699 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:00Z","lastTransitionTime":"2026-01-23T11:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.097135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.097588 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.097643 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.097676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.097696 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:01Z","lastTransitionTime":"2026-01-23T11:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.099510 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 07:27:00.234301572 +0000 UTC Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.117888 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.117931 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:01 crc kubenswrapper[4865]: E0123 11:53:01.118149 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.118855 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:01 crc kubenswrapper[4865]: E0123 11:53:01.119014 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:01 crc kubenswrapper[4865]: E0123 11:53:01.119159 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.200799 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.200836 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.200847 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.200864 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.200878 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:01Z","lastTransitionTime":"2026-01-23T11:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.305257 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.305310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.305319 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.305337 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.305350 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:01Z","lastTransitionTime":"2026-01-23T11:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.330668 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c"} Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.336293 4865 generic.go:334] "Generic (PLEG): container finished" podID="7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2" containerID="b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377" exitCode=0 Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.336352 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" event={"ID":"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2","Type":"ContainerDied","Data":"b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377"} Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.363907 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.387183 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.403353 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.408040 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.408093 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.408104 4865 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.408125 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.408140 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:01Z","lastTransitionTime":"2026-01-23T11:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.422868 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z 
is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.438135 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.451875 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.471611 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.489089 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.504008 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.511910 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.511961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.511972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.511997 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.512011 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:01Z","lastTransitionTime":"2026-01-23T11:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.517347 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.534820 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.546669 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.558170 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.572591 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:01Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.614933 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.614976 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.614984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.614999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.615012 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:01Z","lastTransitionTime":"2026-01-23T11:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.724885 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.724933 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.724944 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.724959 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.724973 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:01Z","lastTransitionTime":"2026-01-23T11:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.827499 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.827540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.827574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.827591 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.827614 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:01Z","lastTransitionTime":"2026-01-23T11:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.930883 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.930938 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.930953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.930975 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:01 crc kubenswrapper[4865]: I0123 11:53:01.930989 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:01Z","lastTransitionTime":"2026-01-23T11:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.040086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.040122 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.040130 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.040145 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.040155 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:02Z","lastTransitionTime":"2026-01-23T11:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.102931 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 00:54:51.970882382 +0000 UTC Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.147173 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.147224 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.147236 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.147254 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.147264 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:02Z","lastTransitionTime":"2026-01-23T11:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.250289 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.250329 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.250340 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.250358 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.250371 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:02Z","lastTransitionTime":"2026-01-23T11:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.345291 4865 generic.go:334] "Generic (PLEG): container finished" podID="7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2" containerID="24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca" exitCode=0 Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.345391 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" event={"ID":"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2","Type":"ContainerDied","Data":"24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca"} Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.353512 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.353572 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.353591 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.353651 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.353675 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:02Z","lastTransitionTime":"2026-01-23T11:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.371456 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.396028 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673
f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.419981 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.441312 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.460235 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.460313 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.460335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.460365 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.460387 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:02Z","lastTransitionTime":"2026-01-23T11:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.463476 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.484177 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.500975 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.516976 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.529352 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.541449 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.556342 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.562037 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.562070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.562078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.562094 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.562105 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:02Z","lastTransitionTime":"2026-01-23T11:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.571792 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.593289 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatu
ses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.605728 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:02Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.668078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.668135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.668148 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.668171 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.668185 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:02Z","lastTransitionTime":"2026-01-23T11:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.748766 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.749156 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:53:10.749093827 +0000 UTC m=+34.918166113 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.772809 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.772862 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.772874 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.772898 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.772912 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:02Z","lastTransitionTime":"2026-01-23T11:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.849544 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.849588 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.849631 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.849655 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.849761 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.849791 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.849810 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.849823 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.849849 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:10.849828617 +0000 UTC m=+35.018900843 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.849868 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:10.849858887 +0000 UTC m=+35.018931113 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.849923 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.849971 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.849967 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.850147 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:10.850116623 +0000 UTC m=+35.019188849 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.849990 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:02 crc kubenswrapper[4865]: E0123 11:53:02.850297 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:10.850271147 +0000 UTC m=+35.019343563 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.875619 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.875670 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.875683 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.875704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.875716 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:02Z","lastTransitionTime":"2026-01-23T11:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.978414 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.978501 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.978521 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.978563 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:02 crc kubenswrapper[4865]: I0123 11:53:02.978578 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:02Z","lastTransitionTime":"2026-01-23T11:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.082046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.082113 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.082133 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.082160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.082179 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:03Z","lastTransitionTime":"2026-01-23T11:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.103540 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:53:39.946659796 +0000 UTC Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.117124 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.117176 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:03 crc kubenswrapper[4865]: E0123 11:53:03.117347 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:03 crc kubenswrapper[4865]: E0123 11:53:03.117520 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.117802 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:03 crc kubenswrapper[4865]: E0123 11:53:03.118014 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.187192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.187237 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.187249 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.187270 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.187286 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:03Z","lastTransitionTime":"2026-01-23T11:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.292149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.292213 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.292232 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.292291 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.292310 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:03Z","lastTransitionTime":"2026-01-23T11:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.356228 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" event={"ID":"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2","Type":"ContainerStarted","Data":"5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093"} Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.376338 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.392702 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.395475 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.395524 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.395541 4865 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.395569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.395585 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:03Z","lastTransitionTime":"2026-01-23T11:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.427399 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z 
is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.443891 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.462161 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.486748 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.498030 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.498084 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.498103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.498129 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.498145 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:03Z","lastTransitionTime":"2026-01-23T11:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.507529 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.528944 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.541373 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.556615 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.571117 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:5
2:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6
173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.589533 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.600653 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.600696 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.600706 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.600724 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.600737 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:03Z","lastTransitionTime":"2026-01-23T11:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.604611 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11
:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.619878 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",
\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:03Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.703112 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.703154 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.703163 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.703180 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.703190 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:03Z","lastTransitionTime":"2026-01-23T11:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.806336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.806378 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.806387 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.806406 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.806418 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:03Z","lastTransitionTime":"2026-01-23T11:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.908886 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.908936 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.908953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.908977 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:03 crc kubenswrapper[4865]: I0123 11:53:03.908993 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:03Z","lastTransitionTime":"2026-01-23T11:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.012266 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.012310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.012322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.012342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.012356 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:04Z","lastTransitionTime":"2026-01-23T11:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.104496 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:11:14.423757951 +0000 UTC Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.115455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.115536 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.115561 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.115634 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.115663 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:04Z","lastTransitionTime":"2026-01-23T11:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.219066 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.219102 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.219113 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.219129 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.219140 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:04Z","lastTransitionTime":"2026-01-23T11:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.322298 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.322342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.322355 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.322375 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.322388 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:04Z","lastTransitionTime":"2026-01-23T11:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.369151 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"4dd532c2daaaf2b4cd706d4c736c732967bf12eb74217455a01028c46d880aaf"} Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.369554 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.400980 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.409090 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.422563 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.425489 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.425540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.425553 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.425576 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.425590 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:04Z","lastTransitionTime":"2026-01-23T11:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.438315 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.456422 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.480392 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.499348 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd532c2daaaf2b4cd706d4c736c732967bf12eb74217455a01028c46d880aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.513052 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.529269 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.529339 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.529352 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.529375 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.529389 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:04Z","lastTransitionTime":"2026-01-23T11:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.531745 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.546983 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.562433 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.580662 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.596642 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.611686 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.625680 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.632268 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.632294 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.632305 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.632321 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.632333 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:04Z","lastTransitionTime":"2026-01-23T11:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.643298 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.655970 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.671139 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.688869 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.704632 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.726809 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd532c2daaaf2b4cd706d4c736c732967bf12eb74217455a01028c46d880aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.734972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.735037 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.735047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.735065 4865 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.735078 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:04Z","lastTransitionTime":"2026-01-23T11:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.740353 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.757563 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.772642 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.787826 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.803556 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.817444 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.829531 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.837902 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.837972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.837986 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.838006 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.838041 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:04Z","lastTransitionTime":"2026-01-23T11:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.847696 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:
52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:04Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.942465 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.942539 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.942563 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.942595 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:04 crc kubenswrapper[4865]: I0123 11:53:04.942654 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:04Z","lastTransitionTime":"2026-01-23T11:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.046567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.047184 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.047212 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.047240 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.047260 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:05Z","lastTransitionTime":"2026-01-23T11:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.104913 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:06:05.461407845 +0000 UTC Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.117324 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.117398 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.117416 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:05 crc kubenswrapper[4865]: E0123 11:53:05.117767 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:05 crc kubenswrapper[4865]: E0123 11:53:05.117771 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:05 crc kubenswrapper[4865]: E0123 11:53:05.117847 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.150060 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.150103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.150118 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.150136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.150149 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:05Z","lastTransitionTime":"2026-01-23T11:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.253396 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.253458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.253476 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.253506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.253526 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:05Z","lastTransitionTime":"2026-01-23T11:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.355806 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.355869 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.355883 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.355903 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.355915 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:05Z","lastTransitionTime":"2026-01-23T11:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.375306 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.376009 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.414034 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.427095 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.443356 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.458206 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.458245 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.458258 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.458275 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.458286 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:05Z","lastTransitionTime":"2026-01-23T11:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.459273 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.472574 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.487995 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.505139 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.527159 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd532c2daaaf2b4cd706d4c736c732967bf12eb74217455a01028c46d880aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.541542 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.561784 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.561862 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.561885 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.561909 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.561926 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:05Z","lastTransitionTime":"2026-01-23T11:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.589096 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.619540 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.650586 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.664474 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.664512 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.664522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.664537 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.664550 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:05Z","lastTransitionTime":"2026-01-23T11:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.672884 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.698026 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc89358
2b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.713949 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:05Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.767043 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.767434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.767496 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.767627 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.767738 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:05Z","lastTransitionTime":"2026-01-23T11:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.870179 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.870234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.870251 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.870277 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.870294 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:05Z","lastTransitionTime":"2026-01-23T11:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.973962 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.974038 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.974066 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.974104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:05 crc kubenswrapper[4865]: I0123 11:53:05.974128 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:05Z","lastTransitionTime":"2026-01-23T11:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.076848 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.076924 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.076944 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.076970 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.076991 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:06Z","lastTransitionTime":"2026-01-23T11:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.106205 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:25:12.084032915 +0000 UTC Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.139501 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.160827 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.180590 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.180647 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.180658 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.180677 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.180696 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:06Z","lastTransitionTime":"2026-01-23T11:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.185079 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:
52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.213328 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.238023 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.262953 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.283988 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.284252 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.284334 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.284414 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.284113 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.284486 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:06Z","lastTransitionTime":"2026-01-23T11:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.315500 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.332570 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.347152 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.375866 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.383987 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/0.log" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.388078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.388120 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.388134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.388155 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.388168 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:06Z","lastTransitionTime":"2026-01-23T11:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.391022 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="4dd532c2daaaf2b4cd706d4c736c732967bf12eb74217455a01028c46d880aaf" exitCode=1 Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.391109 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"4dd532c2daaaf2b4cd706d4c736c732967bf12eb74217455a01028c46d880aaf"} Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.392386 4865 scope.go:117] "RemoveContainer" containerID="4dd532c2daaaf2b4cd706d4c736c732967bf12eb74217455a01028c46d880aaf" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.405102 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.439754 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd532c2daaaf2b4cd706d4c736c732967bf12eb
74217455a01028c46d880aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.459282 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.481044 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.493848 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.493905 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.493917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.493938 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.493952 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:06Z","lastTransitionTime":"2026-01-23T11:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.506778 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.522871 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.538158 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.556596 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.575002 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:5
2:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6
173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.591180 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.596676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.596726 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.596736 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.596755 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.596768 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:06Z","lastTransitionTime":"2026-01-23T11:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.607210 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.620904 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.640821 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.667853 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.700305 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.700349 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:06 
crc kubenswrapper[4865]: I0123 11:53:06.700366 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.700391 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.700410 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:06Z","lastTransitionTime":"2026-01-23T11:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.704648 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4dd532c2daaaf2b4cd706d4c736c732967bf12eb
74217455a01028c46d880aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dd532c2daaaf2b4cd706d4c736c732967bf12eb74217455a01028c46d880aaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:06Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 11:53:05.845267 6100 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 11:53:05.845297 6100 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 11:53:05.845325 6100 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 11:53:05.845337 6100 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 11:53:05.845346 6100 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 11:53:05.845364 6100 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 11:53:05.845381 6100 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 11:53:05.845395 6100 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 11:53:05.845433 6100 factory.go:656] Stopping watch factory\\\\nI0123 11:53:05.845453 6100 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 11:53:05.845465 6100 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 11:53:05.845474 6100 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 11:53:05.845366 6100 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 11:53:05.845488 6100 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 11:53:05.845502 6100 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.721099 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.739471 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:06Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.803829 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.803878 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.803888 4865 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.803903 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.803914 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:06Z","lastTransitionTime":"2026-01-23T11:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.906931 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.906973 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.906983 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.906997 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:06 crc kubenswrapper[4865]: I0123 11:53:06.907006 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:06Z","lastTransitionTime":"2026-01-23T11:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.008804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.008838 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.008864 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.008878 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.008888 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.106357 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 22:23:01.655852841 +0000 UTC Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.111904 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.112338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.112348 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.112363 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.112389 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.117168 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.117178 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:07 crc kubenswrapper[4865]: E0123 11:53:07.117301 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.117328 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:07 crc kubenswrapper[4865]: E0123 11:53:07.117385 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:07 crc kubenswrapper[4865]: E0123 11:53:07.117558 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.215245 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.215309 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.215329 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.215357 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.215389 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.318336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.318384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.318398 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.318417 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.318430 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.398254 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/0.log" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.406461 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7"} Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.406618 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.420868 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.420914 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.420925 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.420942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.420961 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.424823 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.441145 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.456528 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.482108 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.502089 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.525326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.525384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.525405 4865 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.525432 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.525452 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.529562 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e4
9d2271dfe9d703947b412bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dd532c2daaaf2b4cd706d4c736c732967bf12eb74217455a01028c46d880aaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:06Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 11:53:05.845267 6100 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 11:53:05.845297 6100 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 11:53:05.845325 6100 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 11:53:05.845337 6100 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 11:53:05.845346 6100 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 11:53:05.845364 6100 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 11:53:05.845381 6100 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 11:53:05.845395 6100 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 11:53:05.845433 6100 factory.go:656] Stopping watch factory\\\\nI0123 11:53:05.845453 6100 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 11:53:05.845465 6100 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 11:53:05.845474 6100 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 11:53:05.845366 6100 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 11:53:05.845488 6100 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 11:53:05.845502 6100 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.546572 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.562981 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.579064 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.596534 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.613918 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.629419 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.629478 4865 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.629495 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.629518 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.629535 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.633639 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.651491 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.667588 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.718859 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.731946 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.732256 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.732367 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.732575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.732774 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.761337 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.761672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.761860 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.762044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.762217 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: E0123 11:53:07.780682 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 
2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.786195 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.786479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.786627 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.786711 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.786875 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: E0123 11:53:07.802664 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 
2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.807475 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.807549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.807563 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.807588 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.807630 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: E0123 11:53:07.821793 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 
2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.827585 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.827668 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.827684 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.827710 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.827726 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: E0123 11:53:07.842375 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 
2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.847348 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.847391 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.847399 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.847414 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.847426 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: E0123 11:53:07.868590 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 
2025-08-24T17:21:41Z" Jan 23 11:53:07 crc kubenswrapper[4865]: E0123 11:53:07.868796 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.871043 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.871117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.871137 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.871165 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.871185 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.973837 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.973903 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.973917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.973939 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:07 crc kubenswrapper[4865]: I0123 11:53:07.973953 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:07Z","lastTransitionTime":"2026-01-23T11:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.077011 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.077075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.077087 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.077108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.077121 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:08Z","lastTransitionTime":"2026-01-23T11:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.107472 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:00:34.33883872 +0000 UTC Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.180280 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.180365 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.180390 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.180450 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.180471 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:08Z","lastTransitionTime":"2026-01-23T11:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.284704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.284773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.284792 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.284820 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.284839 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:08Z","lastTransitionTime":"2026-01-23T11:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.387705 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.387795 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.387819 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.387854 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.387875 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:08Z","lastTransitionTime":"2026-01-23T11:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.412875 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/1.log" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.414149 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/0.log" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.418572 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7" exitCode=1 Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.418657 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7"} Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.418784 4865 scope.go:117] "RemoveContainer" containerID="4dd532c2daaaf2b4cd706d4c736c732967bf12eb74217455a01028c46d880aaf" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.419475 4865 scope.go:117] "RemoveContainer" containerID="84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7" Jan 23 11:53:08 crc kubenswrapper[4865]: E0123 11:53:08.419711 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.448993 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.465012 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.491040 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.491088 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.491099 4865 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.491118 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.491133 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:08Z","lastTransitionTime":"2026-01-23T11:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.502258 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e4
9d2271dfe9d703947b412bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dd532c2daaaf2b4cd706d4c736c732967bf12eb74217455a01028c46d880aaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:06Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 11:53:05.845267 6100 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 11:53:05.845297 6100 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 11:53:05.845325 6100 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 11:53:05.845337 6100 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 11:53:05.845346 6100 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 11:53:05.845364 6100 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 11:53:05.845381 6100 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 11:53:05.845395 6100 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 11:53:05.845433 6100 factory.go:656] Stopping watch factory\\\\nI0123 11:53:05.845453 6100 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 11:53:05.845465 6100 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 11:53:05.845474 6100 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 11:53:05.845366 6100 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 11:53:05.845488 6100 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 11:53:05.845502 6100 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"perator-58b4c7f79c-55gtf in node crc\\\\nI0123 11:53:07.308217 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 11:53:07.308222 6215 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0123 11:53:07.308137 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0123 11:53:07.308170 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z]\\\\nI0123 11:53:07.308220 6215 obj_retry.go:386] R\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID
\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.522909 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.551803 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.569794 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.589472 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5"] Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.590372 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.593773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.593917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.594021 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.594122 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.594221 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:08Z","lastTransitionTime":"2026-01-23T11:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.596479 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.597203 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.597584 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.617358 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.617997 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bd2bc34-d218-45e1-b168-a304fab36d86-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-54mz5\" (UID: \"5bd2bc34-d218-45e1-b168-a304fab36d86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.618086 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppkvv\" (UniqueName: \"kubernetes.io/projected/5bd2bc34-d218-45e1-b168-a304fab36d86-kube-api-access-ppkvv\") pod \"ovnkube-control-plane-749d76644c-54mz5\" (UID: \"5bd2bc34-d218-45e1-b168-a304fab36d86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.618124 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bd2bc34-d218-45e1-b168-a304fab36d86-env-overrides\") pod \"ovnkube-control-plane-749d76644c-54mz5\" (UID: \"5bd2bc34-d218-45e1-b168-a304fab36d86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.618171 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bd2bc34-d218-45e1-b168-a304fab36d86-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-54mz5\" (UID: \"5bd2bc34-d218-45e1-b168-a304fab36d86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.634467 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.650205 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.667763 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.685295 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.697695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.697758 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.697775 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.697804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.697821 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:08Z","lastTransitionTime":"2026-01-23T11:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.702357 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.715704 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.718987 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bd2bc34-d218-45e1-b168-a304fab36d86-env-overrides\") pod \"ovnkube-control-plane-749d76644c-54mz5\" (UID: \"5bd2bc34-d218-45e1-b168-a304fab36d86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.719060 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bd2bc34-d218-45e1-b168-a304fab36d86-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-54mz5\" (UID: \"5bd2bc34-d218-45e1-b168-a304fab36d86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.719097 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bd2bc34-d218-45e1-b168-a304fab36d86-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-54mz5\" (UID: \"5bd2bc34-d218-45e1-b168-a304fab36d86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.719148 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppkvv\" (UniqueName: \"kubernetes.io/projected/5bd2bc34-d218-45e1-b168-a304fab36d86-kube-api-access-ppkvv\") pod \"ovnkube-control-plane-749d76644c-54mz5\" (UID: \"5bd2bc34-d218-45e1-b168-a304fab36d86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.719716 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5bd2bc34-d218-45e1-b168-a304fab36d86-env-overrides\") pod \"ovnkube-control-plane-749d76644c-54mz5\" (UID: \"5bd2bc34-d218-45e1-b168-a304fab36d86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.720062 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5bd2bc34-d218-45e1-b168-a304fab36d86-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-54mz5\" (UID: \"5bd2bc34-d218-45e1-b168-a304fab36d86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.728980 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5bd2bc34-d218-45e1-b168-a304fab36d86-ovn-control-plane-metrics-cert\") pod 
\"ovnkube-control-plane-749d76644c-54mz5\" (UID: \"5bd2bc34-d218-45e1-b168-a304fab36d86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.728973 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.740733 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppkvv\" (UniqueName: \"kubernetes.io/projected/5bd2bc34-d218-45e1-b168-a304fab36d86-kube-api-access-ppkvv\") pod \"ovnkube-control-plane-749d76644c-54mz5\" (UID: \"5bd2bc34-d218-45e1-b168-a304fab36d86\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.743646 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.769399 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.785011 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.800649 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.800706 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.800720 4865 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.800743 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.800759 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:08Z","lastTransitionTime":"2026-01-23T11:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.808971 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e4
9d2271dfe9d703947b412bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4dd532c2daaaf2b4cd706d4c736c732967bf12eb74217455a01028c46d880aaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:06Z\\\",\\\"message\\\":\\\"/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 11:53:05.845267 6100 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 11:53:05.845297 6100 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 11:53:05.845325 6100 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 11:53:05.845337 6100 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 11:53:05.845346 6100 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 11:53:05.845364 6100 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 11:53:05.845381 6100 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 11:53:05.845395 6100 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 11:53:05.845433 6100 factory.go:656] Stopping watch factory\\\\nI0123 11:53:05.845453 6100 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 11:53:05.845465 6100 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 11:53:05.845474 6100 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 11:53:05.845366 6100 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 11:53:05.845488 6100 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 11:53:05.845502 6100 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"perator-58b4c7f79c-55gtf in node crc\\\\nI0123 11:53:07.308217 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 11:53:07.308222 6215 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0123 11:53:07.308137 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0123 11:53:07.308170 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z]\\\\nI0123 11:53:07.308220 6215 obj_retry.go:386] R\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID
\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.823636 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.840519 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.856741 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.877058 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.901976 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.903881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.903950 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.903971 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.903998 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.904017 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:08Z","lastTransitionTime":"2026-01-23T11:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.914310 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.928109 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: W0123 11:53:08.931918 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bd2bc34_d218_45e1_b168_a304fab36d86.slice/crio-8fb45f689e7db3da01f304bf9288fc538033919d985f0aadecd7c3f705c57119 WatchSource:0}: Error finding container 8fb45f689e7db3da01f304bf9288fc538033919d985f0aadecd7c3f705c57119: Status 404 returned error can't find the container with id 
8fb45f689e7db3da01f304bf9288fc538033919d985f0aadecd7c3f705c57119 Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.956423 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.976219 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:08 crc kubenswrapper[4865]: I0123 11:53:08.991715 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:08Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.006688 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.006753 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.006766 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.006785 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.006798 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:09Z","lastTransitionTime":"2026-01-23T11:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.007561 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.107632 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-11-11 17:43:11.076834572 +0000 UTC Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.109499 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.109537 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.109549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.109570 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.109584 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:09Z","lastTransitionTime":"2026-01-23T11:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.117990 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.118157 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.118479 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:09 crc kubenswrapper[4865]: E0123 11:53:09.118415 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:09 crc kubenswrapper[4865]: E0123 11:53:09.118531 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:09 crc kubenswrapper[4865]: E0123 11:53:09.118669 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.213119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.213154 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.213184 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.213202 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.213212 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:09Z","lastTransitionTime":"2026-01-23T11:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.316417 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.316468 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.316482 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.316502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.316517 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:09Z","lastTransitionTime":"2026-01-23T11:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.420270 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.420335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.420353 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.420378 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.420399 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:09Z","lastTransitionTime":"2026-01-23T11:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.426754 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" event={"ID":"5bd2bc34-d218-45e1-b168-a304fab36d86","Type":"ContainerStarted","Data":"1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49"} Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.426815 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" event={"ID":"5bd2bc34-d218-45e1-b168-a304fab36d86","Type":"ContainerStarted","Data":"8fb45f689e7db3da01f304bf9288fc538033919d985f0aadecd7c3f705c57119"} Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.435194 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/1.log" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.442868 4865 scope.go:117] "RemoveContainer" containerID="84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7" Jan 23 11:53:09 crc kubenswrapper[4865]: E0123 11:53:09.443129 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.463090 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.479898 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.500526 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"perator-58b4c7f79c-55gtf in node crc\\\\nI0123 11:53:07.308217 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 11:53:07.308222 6215 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0123 11:53:07.308137 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0123 11:53:07.308170 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z]\\\\nI0123 11:53:07.308220 6215 obj_retry.go:386] R\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.518757 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.523811 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.523975 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.524069 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.524166 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.524234 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:09Z","lastTransitionTime":"2026-01-23T11:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.533725 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.549439 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.563761 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.579136 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.593932 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.607944 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.623333 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.627134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.627268 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.627358 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.627442 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.627515 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:09Z","lastTransitionTime":"2026-01-23T11:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.642553 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.657139 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.666743 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.679436 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.720819 4865 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-multus/network-metrics-daemon-n76rp"] Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.721721 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:09 crc kubenswrapper[4865]: E0123 11:53:09.721877 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.730440 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.730495 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.730510 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.730530 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.730573 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:09Z","lastTransitionTime":"2026-01-23T11:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.738943 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.750914 4865 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc89358
2b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.765188 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.776526 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.788386 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.800123 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.813343 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.823912 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.830398 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fprs\" (UniqueName: \"kubernetes.io/projected/a15fb93f-eb63-4a8c-bec6-20bed7300dca-kube-api-access-9fprs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.830517 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.833214 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.833249 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.833259 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.833275 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.833287 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:09Z","lastTransitionTime":"2026-01-23T11:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.835548 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.848749 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.863270 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" 
not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.876383 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.896561 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"perator-58b4c7f79c-55gtf in node crc\\\\nI0123 11:53:07.308217 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 11:53:07.308222 6215 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0123 11:53:07.308137 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0123 11:53:07.308170 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z]\\\\nI0123 11:53:07.308220 6215 obj_retry.go:386] R\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.909753 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.928713 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.931393 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.931574 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fprs\" (UniqueName: \"kubernetes.io/projected/a15fb93f-eb63-4a8c-bec6-20bed7300dca-kube-api-access-9fprs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:09 crc kubenswrapper[4865]: E0123 11:53:09.931721 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:09 crc kubenswrapper[4865]: E0123 11:53:09.931868 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs podName:a15fb93f-eb63-4a8c-bec6-20bed7300dca nodeName:}" failed. 
No retries permitted until 2026-01-23 11:53:10.431835721 +0000 UTC m=+34.600907987 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs") pod "network-metrics-daemon-n76rp" (UID: "a15fb93f-eb63-4a8c-bec6-20bed7300dca") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.935530 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.935708 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.935797 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.935881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.935969 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:09Z","lastTransitionTime":"2026-01-23T11:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.944712 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:09Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:09 crc kubenswrapper[4865]: I0123 11:53:09.949699 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fprs\" (UniqueName: \"kubernetes.io/projected/a15fb93f-eb63-4a8c-bec6-20bed7300dca-kube-api-access-9fprs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.039513 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.039787 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.039850 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.039926 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.040058 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:10Z","lastTransitionTime":"2026-01-23T11:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.108038 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:40:27.679566446 +0000 UTC Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.143424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.143711 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.143816 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.143948 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.144042 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:10Z","lastTransitionTime":"2026-01-23T11:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.253903 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.254025 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.254046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.254085 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.254112 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:10Z","lastTransitionTime":"2026-01-23T11:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.357779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.357885 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.357903 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.357931 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.357953 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:10Z","lastTransitionTime":"2026-01-23T11:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.438671 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.438963 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.439076 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs podName:a15fb93f-eb63-4a8c-bec6-20bed7300dca nodeName:}" failed. No retries permitted until 2026-01-23 11:53:11.439050981 +0000 UTC m=+35.608123217 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs") pod "network-metrics-daemon-n76rp" (UID: "a15fb93f-eb63-4a8c-bec6-20bed7300dca") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.451761 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" event={"ID":"5bd2bc34-d218-45e1-b168-a304fab36d86","Type":"ContainerStarted","Data":"f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b"} Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.460814 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.460869 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.460881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.460905 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.460923 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:10Z","lastTransitionTime":"2026-01-23T11:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.479365 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.505180 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.530393 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.551488 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.565967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.566036 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.566049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.566074 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.566089 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:10Z","lastTransitionTime":"2026-01-23T11:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.569841 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.596743 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.617765 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.635061 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.654339 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.671580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.671702 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.671736 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.671791 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.671820 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:10Z","lastTransitionTime":"2026-01-23T11:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.673665 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.699641 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 
11:53:10.734275 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"perator-58b4c7f79c-55gtf in node crc\\\\nI0123 11:53:07.308217 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 11:53:07.308222 6215 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0123 11:53:07.308137 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0123 11:53:07.308170 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z]\\\\nI0123 11:53:07.308220 6215 obj_retry.go:386] R\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.755280 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.775270 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.775674 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.775814 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.775945 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.776064 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:10Z","lastTransitionTime":"2026-01-23T11:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.784237 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.798948 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc 
kubenswrapper[4865]: I0123 11:53:10.822181 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:10Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.843751 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.844043 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:53:26.843998045 +0000 UTC m=+51.013070311 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.879699 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.879765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.879796 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.879833 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.879858 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:10Z","lastTransitionTime":"2026-01-23T11:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.946014 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.946100 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.946139 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.946179 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.946384 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.946415 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.946433 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.946514 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:26.946485385 +0000 UTC m=+51.115557621 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.946914 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.946979 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.947037 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.947055 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:26.947026249 +0000 UTC m=+51.116098475 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.947058 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.947125 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.947151 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:26.947130871 +0000 UTC m=+51.116203137 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:10 crc kubenswrapper[4865]: E0123 11:53:10.947179 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:26.947166442 +0000 UTC m=+51.116238908 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.984557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.984683 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.984703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.984733 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:10 crc kubenswrapper[4865]: I0123 11:53:10.984752 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:10Z","lastTransitionTime":"2026-01-23T11:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.088665 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.088761 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.088789 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.088832 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.088859 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:11Z","lastTransitionTime":"2026-01-23T11:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.108229 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:54:10.860622617 +0000 UTC Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.118239 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.118257 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:11 crc kubenswrapper[4865]: E0123 11:53:11.118450 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.118261 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:11 crc kubenswrapper[4865]: E0123 11:53:11.118534 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:11 crc kubenswrapper[4865]: E0123 11:53:11.118844 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.192457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.192560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.192587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.192663 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.192686 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:11Z","lastTransitionTime":"2026-01-23T11:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.296172 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.296257 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.296278 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.296306 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.296329 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:11Z","lastTransitionTime":"2026-01-23T11:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.399982 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.400038 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.400055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.400081 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.400102 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:11Z","lastTransitionTime":"2026-01-23T11:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.452302 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:11 crc kubenswrapper[4865]: E0123 11:53:11.452551 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:11 crc kubenswrapper[4865]: E0123 11:53:11.452715 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs podName:a15fb93f-eb63-4a8c-bec6-20bed7300dca nodeName:}" failed. No retries permitted until 2026-01-23 11:53:13.452683521 +0000 UTC m=+37.621755787 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs") pod "network-metrics-daemon-n76rp" (UID: "a15fb93f-eb63-4a8c-bec6-20bed7300dca") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.504129 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.504564 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.504686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.504774 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.504913 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:11Z","lastTransitionTime":"2026-01-23T11:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.607896 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.607963 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.608023 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.608049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.608067 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:11Z","lastTransitionTime":"2026-01-23T11:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.712251 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.712359 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.712384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.712416 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.712441 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:11Z","lastTransitionTime":"2026-01-23T11:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.815568 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.815706 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.815734 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.815769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.815790 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:11Z","lastTransitionTime":"2026-01-23T11:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.920347 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.920432 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.920458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.920494 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:11 crc kubenswrapper[4865]: I0123 11:53:11.920522 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:11Z","lastTransitionTime":"2026-01-23T11:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.024001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.024080 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.024104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.024141 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.024168 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:12Z","lastTransitionTime":"2026-01-23T11:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.109060 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 12:00:29.499114614 +0000 UTC Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.117926 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:12 crc kubenswrapper[4865]: E0123 11:53:12.118201 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.127921 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.128192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.128383 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.128651 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.128862 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:12Z","lastTransitionTime":"2026-01-23T11:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.232331 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.232408 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.232430 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.232461 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.232481 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:12Z","lastTransitionTime":"2026-01-23T11:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.335475 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.335520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.335534 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.335555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.335568 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:12Z","lastTransitionTime":"2026-01-23T11:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.439125 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.439173 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.439185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.439206 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.439218 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:12Z","lastTransitionTime":"2026-01-23T11:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.541504 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.541555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.541567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.541587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.541623 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:12Z","lastTransitionTime":"2026-01-23T11:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.644076 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.644285 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.644355 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.644418 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.644508 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:12Z","lastTransitionTime":"2026-01-23T11:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.747228 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.747290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.747311 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.747336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.747354 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:12Z","lastTransitionTime":"2026-01-23T11:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.850561 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.850675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.850692 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.851164 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.851224 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:12Z","lastTransitionTime":"2026-01-23T11:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.955022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.955074 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.955090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.955116 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:12 crc kubenswrapper[4865]: I0123 11:53:12.955134 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:12Z","lastTransitionTime":"2026-01-23T11:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.058758 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.058851 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.058868 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.058895 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.058914 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:13Z","lastTransitionTime":"2026-01-23T11:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.110252 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 04:19:51.483949389 +0000 UTC Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.117149 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:13 crc kubenswrapper[4865]: E0123 11:53:13.117391 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.117678 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:13 crc kubenswrapper[4865]: E0123 11:53:13.118092 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.117684 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:13 crc kubenswrapper[4865]: E0123 11:53:13.118287 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.163184 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.163267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.163284 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.163313 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.163333 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:13Z","lastTransitionTime":"2026-01-23T11:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.242846 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.264095 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.266376 
4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.266451 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.266470 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.266505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.266525 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:13Z","lastTransitionTime":"2026-01-23T11:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.292913 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apis
erver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 
UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.314004 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.339060 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"perator-58b4c7f79c-55gtf in node crc\\\\nI0123 11:53:07.308217 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 11:53:07.308222 6215 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0123 11:53:07.308137 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0123 11:53:07.308170 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z]\\\\nI0123 11:53:07.308220 6215 obj_retry.go:386] R\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.354749 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.370108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.370424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.370664 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.370850 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.370978 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:13Z","lastTransitionTime":"2026-01-23T11:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.372431 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.387290 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.409254 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.436064 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.469439 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.473634 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:13 crc kubenswrapper[4865]: E0123 11:53:13.473879 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:13 crc kubenswrapper[4865]: E0123 11:53:13.474031 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs podName:a15fb93f-eb63-4a8c-bec6-20bed7300dca nodeName:}" failed. No retries permitted until 2026-01-23 11:53:17.473985201 +0000 UTC m=+41.643057467 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs") pod "network-metrics-daemon-n76rp" (UID: "a15fb93f-eb63-4a8c-bec6-20bed7300dca") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.473900 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.474193 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.474217 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.474249 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.474272 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:13Z","lastTransitionTime":"2026-01-23T11:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.493291 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.518107 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.542267 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.561717 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.577408 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.577474 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.577495 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.577528 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.577547 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:13Z","lastTransitionTime":"2026-01-23T11:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.581142 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.602710 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:13Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.680542 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.680584 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.680618 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.680638 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.680652 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:13Z","lastTransitionTime":"2026-01-23T11:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.782733 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.782785 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.782804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.782828 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.782845 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:13Z","lastTransitionTime":"2026-01-23T11:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.885741 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.885796 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.885813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.885836 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.885853 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:13Z","lastTransitionTime":"2026-01-23T11:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.989301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.989365 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.989382 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.989402 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:13 crc kubenswrapper[4865]: I0123 11:53:13.989414 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:13Z","lastTransitionTime":"2026-01-23T11:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.092110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.092175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.092193 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.092222 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.092243 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:14Z","lastTransitionTime":"2026-01-23T11:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.111129 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:26:13.569970012 +0000 UTC Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.118246 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:14 crc kubenswrapper[4865]: E0123 11:53:14.118513 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.194928 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.194995 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.195015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.195045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.195068 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:14Z","lastTransitionTime":"2026-01-23T11:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.298498 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.298573 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.298592 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.298663 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.298685 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:14Z","lastTransitionTime":"2026-01-23T11:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.401515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.401582 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.401629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.401656 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.401679 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:14Z","lastTransitionTime":"2026-01-23T11:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.505547 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.505662 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.505681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.505712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.505732 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:14Z","lastTransitionTime":"2026-01-23T11:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.609793 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.609902 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.609927 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.609961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.609990 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:14Z","lastTransitionTime":"2026-01-23T11:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.712657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.712736 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.712763 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.712795 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.712819 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:14Z","lastTransitionTime":"2026-01-23T11:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.816526 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.816667 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.816696 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.816733 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.816758 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:14Z","lastTransitionTime":"2026-01-23T11:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.919964 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.920045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.920062 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.920086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:14 crc kubenswrapper[4865]: I0123 11:53:14.920104 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:14Z","lastTransitionTime":"2026-01-23T11:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.023231 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.023278 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.023290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.023312 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.023332 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:15Z","lastTransitionTime":"2026-01-23T11:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.111928 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:07:19.397885968 +0000 UTC Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.117372 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.117480 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.117547 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:15 crc kubenswrapper[4865]: E0123 11:53:15.117585 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:15 crc kubenswrapper[4865]: E0123 11:53:15.117766 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:15 crc kubenswrapper[4865]: E0123 11:53:15.117893 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.125884 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.125913 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.125922 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.125937 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.125947 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:15Z","lastTransitionTime":"2026-01-23T11:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.228393 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.228458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.228478 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.228508 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.228528 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:15Z","lastTransitionTime":"2026-01-23T11:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.332039 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.332095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.332112 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.332133 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.332148 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:15Z","lastTransitionTime":"2026-01-23T11:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.436428 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.436491 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.436511 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.436537 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.436556 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:15Z","lastTransitionTime":"2026-01-23T11:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.539932 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.539981 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.539989 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.540009 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.540026 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:15Z","lastTransitionTime":"2026-01-23T11:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.643571 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.643677 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.643708 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.643730 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.643743 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:15Z","lastTransitionTime":"2026-01-23T11:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.747823 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.747887 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.747904 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.747927 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.747941 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:15Z","lastTransitionTime":"2026-01-23T11:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.851770 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.851849 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.851871 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.851897 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.851950 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:15Z","lastTransitionTime":"2026-01-23T11:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.955763 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.956248 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.956575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.956853 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:15 crc kubenswrapper[4865]: I0123 11:53:15.957009 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:15Z","lastTransitionTime":"2026-01-23T11:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.059818 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.059896 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.059916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.059942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.059961 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:16Z","lastTransitionTime":"2026-01-23T11:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.112496 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:21:48.060685472 +0000 UTC Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.117386 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:16 crc kubenswrapper[4865]: E0123 11:53:16.117582 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.138956 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.164529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.164621 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.164638 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.164661 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.164677 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:16Z","lastTransitionTime":"2026-01-23T11:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.167263 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.198150 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"perator-58b4c7f79c-55gtf in node crc\\\\nI0123 11:53:07.308217 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 11:53:07.308222 6215 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0123 11:53:07.308137 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0123 11:53:07.308170 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z]\\\\nI0123 11:53:07.308220 6215 obj_retry.go:386] 
R\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.222235 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.244314 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 
11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.266021 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.268358 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.268416 4865 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.268430 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.268452 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.268468 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:16Z","lastTransitionTime":"2026-01-23T11:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.292071 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2
416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.314537 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image
\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.337376 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.366113 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.371512 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.371569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.371581 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.371620 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.371642 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:16Z","lastTransitionTime":"2026-01-23T11:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.422470 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.440646 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.457981 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.473936 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.473997 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.474010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.474035 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.474053 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:16Z","lastTransitionTime":"2026-01-23T11:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.478212 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.492269 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.507382 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:16Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.578093 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.578156 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.578174 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.578202 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.578220 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:16Z","lastTransitionTime":"2026-01-23T11:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.682891 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.682967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.682982 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.683034 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.683052 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:16Z","lastTransitionTime":"2026-01-23T11:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.786542 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.787220 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.787437 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.787643 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.787822 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:16Z","lastTransitionTime":"2026-01-23T11:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.891464 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.891846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.892263 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.892428 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.892554 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:16Z","lastTransitionTime":"2026-01-23T11:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.996002 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.996063 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.996080 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.996105 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:16 crc kubenswrapper[4865]: I0123 11:53:16.996130 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:16Z","lastTransitionTime":"2026-01-23T11:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.100724 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.100798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.100816 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.100848 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.100867 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:17Z","lastTransitionTime":"2026-01-23T11:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.112935 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:58:11.70607606 +0000 UTC Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.117536 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.117631 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.117649 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:17 crc kubenswrapper[4865]: E0123 11:53:17.117802 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:17 crc kubenswrapper[4865]: E0123 11:53:17.118131 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:17 crc kubenswrapper[4865]: E0123 11:53:17.118019 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.205266 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.205349 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.205380 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.205415 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.205441 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:17Z","lastTransitionTime":"2026-01-23T11:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.308054 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.308098 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.308107 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.308124 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.308135 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:17Z","lastTransitionTime":"2026-01-23T11:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.410943 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.411044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.411065 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.411099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.411120 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:17Z","lastTransitionTime":"2026-01-23T11:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.514811 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.514873 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.514890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.514915 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.514931 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:17Z","lastTransitionTime":"2026-01-23T11:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.521963 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:17 crc kubenswrapper[4865]: E0123 11:53:17.522156 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:17 crc kubenswrapper[4865]: E0123 11:53:17.522249 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs podName:a15fb93f-eb63-4a8c-bec6-20bed7300dca nodeName:}" failed. No retries permitted until 2026-01-23 11:53:25.522222063 +0000 UTC m=+49.691294299 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs") pod "network-metrics-daemon-n76rp" (UID: "a15fb93f-eb63-4a8c-bec6-20bed7300dca") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.618330 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.618400 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.618411 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.618435 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.618448 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:17Z","lastTransitionTime":"2026-01-23T11:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.721386 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.721470 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.721499 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.721539 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.721565 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:17Z","lastTransitionTime":"2026-01-23T11:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.824253 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.824311 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.824329 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.824354 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.824373 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:17Z","lastTransitionTime":"2026-01-23T11:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.926285 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.926344 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.926361 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.926385 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.926402 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:17Z","lastTransitionTime":"2026-01-23T11:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.970567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.970638 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.970652 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.970695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.970709 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:17Z","lastTransitionTime":"2026-01-23T11:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:17 crc kubenswrapper[4865]: E0123 11:53:17.991033 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:17Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.996013 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.996064 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.996076 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.996100 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:17 crc kubenswrapper[4865]: I0123 11:53:17.996114 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:17Z","lastTransitionTime":"2026-01-23T11:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:18 crc kubenswrapper[4865]: E0123 11:53:18.017037 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:18Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.022223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.022297 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.022321 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.022355 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.022393 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:18Z","lastTransitionTime":"2026-01-23T11:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:18 crc kubenswrapper[4865]: E0123 11:53:18.044517 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:18Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.049071 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.049130 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.049148 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.049175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.049195 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:18Z","lastTransitionTime":"2026-01-23T11:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:18 crc kubenswrapper[4865]: E0123 11:53:18.067551 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:18Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.072354 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.072408 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.072425 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.072453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.072471 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:18Z","lastTransitionTime":"2026-01-23T11:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:18 crc kubenswrapper[4865]: E0123 11:53:18.094059 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:18Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:18 crc kubenswrapper[4865]: E0123 11:53:18.094345 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.096246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.096304 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.096325 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.096352 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.096372 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:18Z","lastTransitionTime":"2026-01-23T11:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.114748 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:22:53.959830991 +0000 UTC Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.118201 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:18 crc kubenswrapper[4865]: E0123 11:53:18.118422 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.199461 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.199507 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.199519 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.199540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.199554 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:18Z","lastTransitionTime":"2026-01-23T11:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.303502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.303863 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.303998 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.304155 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.304273 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:18Z","lastTransitionTime":"2026-01-23T11:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.407505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.407573 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.407591 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.407643 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.407661 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:18Z","lastTransitionTime":"2026-01-23T11:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.510567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.510675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.510698 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.510731 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.510754 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:18Z","lastTransitionTime":"2026-01-23T11:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.614010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.614071 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.614083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.614106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.614118 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:18Z","lastTransitionTime":"2026-01-23T11:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.717011 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.717078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.717095 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.717122 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.717143 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:18Z","lastTransitionTime":"2026-01-23T11:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.820670 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.820749 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.820765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.820786 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.820800 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:18Z","lastTransitionTime":"2026-01-23T11:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.923652 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.923745 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.923765 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.923797 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:18 crc kubenswrapper[4865]: I0123 11:53:18.923825 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:18Z","lastTransitionTime":"2026-01-23T11:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.027904 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.027970 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.027988 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.028016 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.028034 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:19Z","lastTransitionTime":"2026-01-23T11:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.115652 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:10:36.848953622 +0000 UTC Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.118101 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.118129 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.118130 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:19 crc kubenswrapper[4865]: E0123 11:53:19.118313 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:19 crc kubenswrapper[4865]: E0123 11:53:19.118497 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:19 crc kubenswrapper[4865]: E0123 11:53:19.118706 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.130538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.130582 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.130620 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.130644 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.130658 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:19Z","lastTransitionTime":"2026-01-23T11:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.233769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.233843 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.233880 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.233897 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.233911 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:19Z","lastTransitionTime":"2026-01-23T11:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.336742 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.337447 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.337541 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.337657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.337813 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:19Z","lastTransitionTime":"2026-01-23T11:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.440720 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.441554 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.441732 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.442049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.442228 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:19Z","lastTransitionTime":"2026-01-23T11:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.544825 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.544863 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.544872 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.544888 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.544898 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:19Z","lastTransitionTime":"2026-01-23T11:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.648104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.648152 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.648162 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.648181 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.648192 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:19Z","lastTransitionTime":"2026-01-23T11:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.750378 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.750447 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.750457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.750474 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.750484 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:19Z","lastTransitionTime":"2026-01-23T11:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.853428 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.853510 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.853528 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.853555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.853574 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:19Z","lastTransitionTime":"2026-01-23T11:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.957141 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.957209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.957233 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.957264 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:19 crc kubenswrapper[4865]: I0123 11:53:19.957291 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:19Z","lastTransitionTime":"2026-01-23T11:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.061028 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.061104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.061123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.061151 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.061171 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:20Z","lastTransitionTime":"2026-01-23T11:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.116546 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:26:45.116865573 +0000 UTC Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.117638 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:20 crc kubenswrapper[4865]: E0123 11:53:20.117835 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.164266 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.164334 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.164350 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.164374 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.164386 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:20Z","lastTransitionTime":"2026-01-23T11:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.268166 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.268234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.268246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.268265 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.268278 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:20Z","lastTransitionTime":"2026-01-23T11:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.371780 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.371840 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.371861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.371889 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.371908 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:20Z","lastTransitionTime":"2026-01-23T11:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.474859 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.474961 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.475023 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.475050 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.475067 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:20Z","lastTransitionTime":"2026-01-23T11:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.578029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.578093 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.578107 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.578130 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.578151 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:20Z","lastTransitionTime":"2026-01-23T11:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.681759 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.681873 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.681893 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.681955 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.681981 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:20Z","lastTransitionTime":"2026-01-23T11:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.785319 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.785370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.785380 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.785403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.785420 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:20Z","lastTransitionTime":"2026-01-23T11:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.889404 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.889479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.889498 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.889528 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.889549 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:20Z","lastTransitionTime":"2026-01-23T11:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.992950 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.993053 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.993072 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.993098 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:20 crc kubenswrapper[4865]: I0123 11:53:20.993117 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:20Z","lastTransitionTime":"2026-01-23T11:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.096790 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.096886 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.096906 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.096930 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.096949 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:21Z","lastTransitionTime":"2026-01-23T11:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.117415 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.117489 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.117436 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.117355 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:02:23.66396666 +0000 UTC Jan 23 11:53:21 crc kubenswrapper[4865]: E0123 11:53:21.117682 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:21 crc kubenswrapper[4865]: E0123 11:53:21.117772 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:21 crc kubenswrapper[4865]: E0123 11:53:21.117934 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.201365 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.201437 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.201462 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.201497 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.201521 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:21Z","lastTransitionTime":"2026-01-23T11:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.305444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.305522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.305547 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.305575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.305630 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:21Z","lastTransitionTime":"2026-01-23T11:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.409529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.409887 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.409991 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.410083 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.410235 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:21Z","lastTransitionTime":"2026-01-23T11:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.513345 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.513419 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.513438 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.513467 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.513485 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:21Z","lastTransitionTime":"2026-01-23T11:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.617096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.617178 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.617191 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.617211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.617225 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:21Z","lastTransitionTime":"2026-01-23T11:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.721014 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.721273 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.721420 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.721827 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.721976 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:21Z","lastTransitionTime":"2026-01-23T11:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.826090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.826160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.826181 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.826211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.826231 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:21Z","lastTransitionTime":"2026-01-23T11:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.930015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.930775 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.930814 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.930845 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:21 crc kubenswrapper[4865]: I0123 11:53:21.930862 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:21Z","lastTransitionTime":"2026-01-23T11:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.034706 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.034782 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.034804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.034835 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.034860 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:22Z","lastTransitionTime":"2026-01-23T11:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.117543 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.117821 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:48:44.652029512 +0000 UTC Jan 23 11:53:22 crc kubenswrapper[4865]: E0123 11:53:22.119441 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.119647 4865 scope.go:117] "RemoveContainer" containerID="84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.140003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.140089 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.140119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.140155 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.140183 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:22Z","lastTransitionTime":"2026-01-23T11:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.243493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.243543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.243556 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.243578 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.243590 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:22Z","lastTransitionTime":"2026-01-23T11:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.346915 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.346955 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.346965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.347003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.347014 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:22Z","lastTransitionTime":"2026-01-23T11:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.449418 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.449462 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.449473 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.449492 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.449503 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:22Z","lastTransitionTime":"2026-01-23T11:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.505791 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/1.log" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.510039 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b"} Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.510935 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.525240 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.537139 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.553795 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.553860 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.553873 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.553911 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.553944 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:22Z","lastTransitionTime":"2026-01-23T11:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.555411 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.572213 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 
11:53:22.598678 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"perator-58b4c7f79c-55gtf in node crc\\\\nI0123 11:53:07.308217 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 11:53:07.308222 6215 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0123 11:53:07.308137 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0123 11:53:07.308170 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z]\\\\nI0123 11:53:07.308220 6215 obj_retry.go:386] 
R\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.610820 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.625013 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 
11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.664304 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.666416 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.666462 4865 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.666474 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.666493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.666504 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:22Z","lastTransitionTime":"2026-01-23T11:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.696413 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.713625 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.729228 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.744088 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.757887 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.768511 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.768543 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.768553 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.768572 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.768583 4865 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:22Z","lastTransitionTime":"2026-01-23T11:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.775172 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\
\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.791343 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d3
80d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.805245 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-cont
roller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:22Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.871038 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.871091 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.871104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.871121 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.871132 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:22Z","lastTransitionTime":"2026-01-23T11:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.973770 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.973820 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.973829 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.973847 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:22 crc kubenswrapper[4865]: I0123 11:53:22.973857 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:22Z","lastTransitionTime":"2026-01-23T11:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.076947 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.076995 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.077011 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.077031 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.077044 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:23Z","lastTransitionTime":"2026-01-23T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.117461 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.117583 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.117626 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:23 crc kubenswrapper[4865]: E0123 11:53:23.117658 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:23 crc kubenswrapper[4865]: E0123 11:53:23.117779 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:23 crc kubenswrapper[4865]: E0123 11:53:23.117931 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.119444 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:29:09.051749754 +0000 UTC Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.180287 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.180357 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.180372 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.180393 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.180409 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:23Z","lastTransitionTime":"2026-01-23T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.283627 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.283670 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.283680 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.283695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.283708 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:23Z","lastTransitionTime":"2026-01-23T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.387595 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.387662 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.387679 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.387712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.387726 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:23Z","lastTransitionTime":"2026-01-23T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.490864 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.490906 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.490916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.490930 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.490941 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:23Z","lastTransitionTime":"2026-01-23T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.517764 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/2.log" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.518761 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/1.log" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.522698 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b" exitCode=1 Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.522749 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b"} Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.522795 4865 scope.go:117] "RemoveContainer" containerID="84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.523734 4865 scope.go:117] "RemoveContainer" containerID="146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b" Jan 23 11:53:23 crc kubenswrapper[4865]: E0123 11:53:23.523957 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.542761 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.565194 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.584061 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.594156 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.594207 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.594217 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.594240 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.594252 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:23Z","lastTransitionTime":"2026-01-23T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.600733 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.617562 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.632321 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.647208 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.660490 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.672239 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.686674 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.697250 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.697321 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.697342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.697370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.697392 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:23Z","lastTransitionTime":"2026-01-23T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.704916 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.730062 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.751134 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.789288 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84d57babaaaf0c70882759e20fb5d978257a66e49d2271dfe9d703947b412bc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:07Z\\\",\\\"message\\\":\\\"perator-58b4c7f79c-55gtf in node crc\\\\nI0123 11:53:07.308217 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0123 11:53:07.308222 6215 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0123 11:53:07.308137 6215 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF0123 11:53:07.308170 6215 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:07Z is after 2025-08-24T17:21:41Z]\\\\nI0123 11:53:07.308220 6215 obj_retry.go:386] R\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:23Z\\\",\\\"message\\\":\\\"78b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate 
Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 11:53:23.093765 6412 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 11:53:23.093738 6412 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 11:53:23.093861 6412 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68
f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.802056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.802401 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.802527 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.802717 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:23 crc 
kubenswrapper[4865]: I0123 11:53:23.802903 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:23Z","lastTransitionTime":"2026-01-23T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.813825 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.836578 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:23Z is after 2025-08-24T17:21:41Z" Jan 23 
11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.907395 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.907951 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.908314 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.908522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:23 crc kubenswrapper[4865]: I0123 11:53:23.908698 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:23Z","lastTransitionTime":"2026-01-23T11:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.012246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.012301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.012317 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.012341 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.012358 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:24Z","lastTransitionTime":"2026-01-23T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.115938 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.116032 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.116046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.116069 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.116082 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:24Z","lastTransitionTime":"2026-01-23T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.117335 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:24 crc kubenswrapper[4865]: E0123 11:53:24.117549 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.120862 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:24:59.667305831 +0000 UTC Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.220169 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.220226 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.220239 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.220260 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.220274 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:24Z","lastTransitionTime":"2026-01-23T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.323171 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.323243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.323261 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.323288 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.323306 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:24Z","lastTransitionTime":"2026-01-23T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.427345 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.427442 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.427466 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.427497 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.427514 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:24Z","lastTransitionTime":"2026-01-23T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.530298 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.530369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.530391 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.530419 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.530443 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:24Z","lastTransitionTime":"2026-01-23T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.530885 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/2.log" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.537579 4865 scope.go:117] "RemoveContainer" containerID="146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b" Jan 23 11:53:24 crc kubenswrapper[4865]: E0123 11:53:24.537949 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.558499 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 
1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.578790 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.607771 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:23Z\\\",\\\"message\\\":\\\"78b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 11:53:23.093765 6412 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 11:53:23.093738 6412 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 11:53:23.093861 6412 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.621565 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.634028 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.634151 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.634213 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.634287 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.634313 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:24Z","lastTransitionTime":"2026-01-23T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.640310 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.657732 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc 
kubenswrapper[4865]: I0123 11:53:24.681575 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.705098 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.725970 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.737561 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.737638 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.737651 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.737669 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.737681 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:24Z","lastTransitionTime":"2026-01-23T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.748511 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.771245 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.791321 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.810520 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cn
i/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.825544 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.840471 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.840538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.840551 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.840570 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.840585 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:24Z","lastTransitionTime":"2026-01-23T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.842652 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.858030 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:24Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.942991 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.943086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.943110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.943152 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:24 crc kubenswrapper[4865]: I0123 11:53:24.943176 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:24Z","lastTransitionTime":"2026-01-23T11:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.046139 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.046220 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.046238 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.046268 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.046287 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:25Z","lastTransitionTime":"2026-01-23T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.117267 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.117335 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.117369 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:25 crc kubenswrapper[4865]: E0123 11:53:25.117454 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:25 crc kubenswrapper[4865]: E0123 11:53:25.117591 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:25 crc kubenswrapper[4865]: E0123 11:53:25.117826 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.121289 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:57:09.505918394 +0000 UTC Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.149262 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.149350 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.149373 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.149407 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.149435 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:25Z","lastTransitionTime":"2026-01-23T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.253297 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.253356 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.253369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.253390 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.253405 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:25Z","lastTransitionTime":"2026-01-23T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.356792 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.356860 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.356872 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.356905 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.356920 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:25Z","lastTransitionTime":"2026-01-23T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.460768 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.460825 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.460838 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.460860 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.460877 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:25Z","lastTransitionTime":"2026-01-23T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.564557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.564679 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.564715 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.564745 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.564766 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:25Z","lastTransitionTime":"2026-01-23T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.621775 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:25 crc kubenswrapper[4865]: E0123 11:53:25.622067 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:25 crc kubenswrapper[4865]: E0123 11:53:25.622168 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs podName:a15fb93f-eb63-4a8c-bec6-20bed7300dca nodeName:}" failed. No retries permitted until 2026-01-23 11:53:41.622141079 +0000 UTC m=+65.791213335 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs") pod "network-metrics-daemon-n76rp" (UID: "a15fb93f-eb63-4a8c-bec6-20bed7300dca") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.668733 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.668829 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.668853 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.668881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.668902 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:25Z","lastTransitionTime":"2026-01-23T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.772051 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.772100 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.772111 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.772129 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.772142 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:25Z","lastTransitionTime":"2026-01-23T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.874947 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.874991 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.875001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.875021 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.875034 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:25Z","lastTransitionTime":"2026-01-23T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.979108 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.979164 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.979183 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.979241 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:25 crc kubenswrapper[4865]: I0123 11:53:25.979263 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:25Z","lastTransitionTime":"2026-01-23T11:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.082937 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.083055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.083075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.083101 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.083118 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:26Z","lastTransitionTime":"2026-01-23T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.118136 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:26 crc kubenswrapper[4865]: E0123 11:53:26.118493 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.125046 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:40:22.356917542 +0000 UTC Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.146038 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be 
initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.171751 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.186966 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.187029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.187043 4865 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.187089 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.187167 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:26Z","lastTransitionTime":"2026-01-23T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.210057 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae02969
5644b8640eee84f07be9c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:23Z\\\",\\\"message\\\":\\\"78b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 11:53:23.093765 6412 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 11:53:23.093738 6412 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 11:53:23.093861 6412 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.228116 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.250097 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.266193 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.289735 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.291193 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.291233 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.291448 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.291480 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.291798 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:26Z","lastTransitionTime":"2026-01-23T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.315106 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.339358 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.395020 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.395094 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.395114 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.395143 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.395163 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:26Z","lastTransitionTime":"2026-01-23T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.416330 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.453528 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.474236 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.491991 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.498117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.498176 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:26 crc 
kubenswrapper[4865]: I0123 11:53:26.498192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.498213 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.498228 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:26Z","lastTransitionTime":"2026-01-23T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.505028 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.513676 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.524893 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:26Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.600517 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.600632 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.600655 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.600694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.600715 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:26Z","lastTransitionTime":"2026-01-23T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.704404 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.704460 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.704478 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.704502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.704521 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:26Z","lastTransitionTime":"2026-01-23T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.810698 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.810843 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.810875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.810910 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.810935 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:26Z","lastTransitionTime":"2026-01-23T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.914712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.915076 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.915178 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.915283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.915369 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:26Z","lastTransitionTime":"2026-01-23T11:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:26 crc kubenswrapper[4865]: I0123 11:53:26.943559 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:53:26 crc kubenswrapper[4865]: E0123 11:53:26.944028 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:53:58.94399546 +0000 UTC m=+83.113067726 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.019138 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.019199 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.019220 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.019248 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.019268 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:27Z","lastTransitionTime":"2026-01-23T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.045377 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.045450 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.045534 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.045576 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.046743 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.046859 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:59.046826339 +0000 UTC m=+83.215898595 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.047198 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.047270 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:59.047248939 +0000 UTC m=+83.216321195 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.047573 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.047643 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.047673 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.047724 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:59.047708639 +0000 UTC m=+83.216780905 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.049856 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.049889 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.049905 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.049960 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 11:53:59.049942863 +0000 UTC m=+83.219015119 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.118155 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.118207 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.118155 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.118461 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.118675 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:27 crc kubenswrapper[4865]: E0123 11:53:27.118820 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.122973 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.123134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.123253 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.123369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.123585 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:27Z","lastTransitionTime":"2026-01-23T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.126522 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:32:09.430745885 +0000 UTC Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.227270 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.227895 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.228122 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.228488 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.228684 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:27Z","lastTransitionTime":"2026-01-23T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.332212 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.332705 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.332914 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.333100 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.333279 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:27Z","lastTransitionTime":"2026-01-23T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.436277 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.436808 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.436965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.437118 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.437262 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:27Z","lastTransitionTime":"2026-01-23T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.540802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.540855 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.540866 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.540885 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.540903 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:27Z","lastTransitionTime":"2026-01-23T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.644424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.644494 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.644510 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.644538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.644555 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:27Z","lastTransitionTime":"2026-01-23T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.747494 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.747654 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.747676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.747704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.747727 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:27Z","lastTransitionTime":"2026-01-23T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.851117 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.851202 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.851227 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.851258 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.851294 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:27Z","lastTransitionTime":"2026-01-23T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.955025 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.955078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.955096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.955120 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:27 crc kubenswrapper[4865]: I0123 11:53:27.955139 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:27Z","lastTransitionTime":"2026-01-23T11:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.059546 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.059588 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.059623 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.059641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.059653 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.117431 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:28 crc kubenswrapper[4865]: E0123 11:53:28.117641 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.127550 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:57:28.189002463 +0000 UTC Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.162453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.162508 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.162559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.162592 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.162645 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.265941 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.266060 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.266079 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.266102 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.266120 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.297086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.297382 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.297485 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.297660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.297887 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: E0123 11:53:28.321915 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:28Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.327275 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.327419 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.327533 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.327661 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.327775 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: E0123 11:53:28.348156 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:28Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.352558 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.352633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.352656 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.352690 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.352713 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: E0123 11:53:28.374902 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:28Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.380161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.380325 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.380388 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.380466 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.380528 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: E0123 11:53:28.400201 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:28Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.406639 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.406715 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.406743 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.406772 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.406791 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: E0123 11:53:28.432701 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:28Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:28 crc kubenswrapper[4865]: E0123 11:53:28.432939 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.436256 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.436331 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.436351 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.436380 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.436401 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.540324 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.540616 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.540700 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.540829 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.540926 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.643666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.643744 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.643754 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.643791 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.643804 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.748757 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.748815 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.748831 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.748857 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.748875 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.851751 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.851802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.851813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.851842 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.851855 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.954888 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.955724 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.955921 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.955970 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:28 crc kubenswrapper[4865]: I0123 11:53:28.955999 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:28Z","lastTransitionTime":"2026-01-23T11:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.079915 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.079995 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.080015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.080043 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.080063 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:29Z","lastTransitionTime":"2026-01-23T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.117407 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:29 crc kubenswrapper[4865]: E0123 11:53:29.117694 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.117722 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.117886 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:29 crc kubenswrapper[4865]: E0123 11:53:29.117991 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:29 crc kubenswrapper[4865]: E0123 11:53:29.118206 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.128488 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:27:18.539409883 +0000 UTC Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.183252 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.183290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.183299 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.183336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.183348 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:29Z","lastTransitionTime":"2026-01-23T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.287105 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.287161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.287183 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.287209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.287228 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:29Z","lastTransitionTime":"2026-01-23T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.390944 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.391234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.391262 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.391298 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.391321 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:29Z","lastTransitionTime":"2026-01-23T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.494492 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.494551 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.494563 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.494584 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.494621 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:29Z","lastTransitionTime":"2026-01-23T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.597534 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.597648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.597669 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.597695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.597713 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:29Z","lastTransitionTime":"2026-01-23T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.701587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.701660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.701671 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.701685 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.701696 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:29Z","lastTransitionTime":"2026-01-23T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.804965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.805014 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.805029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.805049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.805067 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:29Z","lastTransitionTime":"2026-01-23T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.907949 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.908007 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.908020 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.908044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:29 crc kubenswrapper[4865]: I0123 11:53:29.908115 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:29Z","lastTransitionTime":"2026-01-23T11:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.011030 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.011096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.011111 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.011134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.011152 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:30Z","lastTransitionTime":"2026-01-23T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.116355 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.116424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.116436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.116458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.116470 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:30Z","lastTransitionTime":"2026-01-23T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.117313 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:30 crc kubenswrapper[4865]: E0123 11:53:30.117430 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.129335 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:21:09.527863238 +0000 UTC Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.219051 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.219087 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.219096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.219111 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.219120 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:30Z","lastTransitionTime":"2026-01-23T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.323048 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.323103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.323125 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.323151 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.323170 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:30Z","lastTransitionTime":"2026-01-23T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.427004 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.427059 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.427079 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.427102 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.427119 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:30Z","lastTransitionTime":"2026-01-23T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.529970 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.530018 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.530031 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.530047 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.530065 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:30Z","lastTransitionTime":"2026-01-23T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.633310 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.633362 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.633370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.633388 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.633398 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:30Z","lastTransitionTime":"2026-01-23T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.736384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.736440 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.736455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.736475 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.736487 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:30Z","lastTransitionTime":"2026-01-23T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.839342 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.839384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.839395 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.839414 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.839425 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:30Z","lastTransitionTime":"2026-01-23T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.943052 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.943090 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.943100 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.943118 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:30 crc kubenswrapper[4865]: I0123 11:53:30.943128 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:30Z","lastTransitionTime":"2026-01-23T11:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.045076 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.045134 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.045148 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.045203 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.045222 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:31Z","lastTransitionTime":"2026-01-23T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.117789 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.117907 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:31 crc kubenswrapper[4865]: E0123 11:53:31.117992 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.117789 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:31 crc kubenswrapper[4865]: E0123 11:53:31.118126 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:31 crc kubenswrapper[4865]: E0123 11:53:31.118227 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.129931 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:10:23.529728836 +0000 UTC Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.147857 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.147906 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.147920 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.147950 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.147968 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:31Z","lastTransitionTime":"2026-01-23T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.251058 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.251110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.251120 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.251138 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.251148 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:31Z","lastTransitionTime":"2026-01-23T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.355729 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.355822 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.355873 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.355916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.355944 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:31Z","lastTransitionTime":"2026-01-23T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.460585 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.461630 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.461844 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.462003 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.462141 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:31Z","lastTransitionTime":"2026-01-23T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.565135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.565185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.565205 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.565230 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.565249 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:31Z","lastTransitionTime":"2026-01-23T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.668750 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.668808 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.668818 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.668835 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.668846 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:31Z","lastTransitionTime":"2026-01-23T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.772233 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.772274 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.772283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.772299 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.772312 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:31Z","lastTransitionTime":"2026-01-23T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.875106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.875196 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.875207 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.875229 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.875244 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:31Z","lastTransitionTime":"2026-01-23T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.965150 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.978928 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.978966 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.978995 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.979017 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.979029 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:31Z","lastTransitionTime":"2026-01-23T11:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.982401 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 11:53:31 crc kubenswrapper[4865]: I0123 11:53:31.988715 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:31Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.013463 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.029197 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.048591 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.065918 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.079625 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.081697 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.081811 4865 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.081830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.081852 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.081879 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:32Z","lastTransitionTime":"2026-01-23T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.100571 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.116184 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.117982 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:32 crc kubenswrapper[4865]: E0123 11:53:32.118278 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.129033 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.130351 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:01:13.366896729 +0000 UTC Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.148290 4865 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" 
Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.166739 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.183936 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.185586 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.185641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.185653 4865 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.185675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.185693 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:32Z","lastTransitionTime":"2026-01-23T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.211150 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae02969
5644b8640eee84f07be9c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:23Z\\\",\\\"message\\\":\\\"78b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 11:53:23.093765 6412 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 11:53:23.093738 6412 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 11:53:23.093861 6412 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.224068 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.238434 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.252586 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:32Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.289113 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.289442 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.289535 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.289632 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.289710 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:32Z","lastTransitionTime":"2026-01-23T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.392748 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.393165 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.393409 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.393644 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.393826 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:32Z","lastTransitionTime":"2026-01-23T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.497101 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.497145 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.497164 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.497183 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.497197 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:32Z","lastTransitionTime":"2026-01-23T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.600700 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.600759 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.600770 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.600793 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.600805 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:32Z","lastTransitionTime":"2026-01-23T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.703240 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.703289 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.703301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.703322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.703337 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:32Z","lastTransitionTime":"2026-01-23T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.806470 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.806839 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.806913 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.806993 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.807058 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:32Z","lastTransitionTime":"2026-01-23T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.915751 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.916109 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.916196 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.916343 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:32 crc kubenswrapper[4865]: I0123 11:53:32.916438 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:32Z","lastTransitionTime":"2026-01-23T11:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.019225 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.019309 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.019325 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.019349 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.019364 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:33Z","lastTransitionTime":"2026-01-23T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.117547 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.117679 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:33 crc kubenswrapper[4865]: E0123 11:53:33.117738 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:33 crc kubenswrapper[4865]: E0123 11:53:33.117897 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.117956 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:33 crc kubenswrapper[4865]: E0123 11:53:33.118010 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.122230 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.122285 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.122304 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.122328 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.122346 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:33Z","lastTransitionTime":"2026-01-23T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.130720 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:12:33.310272749 +0000 UTC Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.224658 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.224702 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.224713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.224729 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.224743 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:33Z","lastTransitionTime":"2026-01-23T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.328594 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.328709 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.328727 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.328752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.328773 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:33Z","lastTransitionTime":"2026-01-23T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.431930 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.431999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.432018 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.432046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.432065 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:33Z","lastTransitionTime":"2026-01-23T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.534962 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.535012 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.535025 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.535049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.535062 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:33Z","lastTransitionTime":"2026-01-23T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.638898 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.638970 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.638990 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.639017 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.639038 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:33Z","lastTransitionTime":"2026-01-23T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.742305 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.742887 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.743057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.743260 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.743457 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:33Z","lastTransitionTime":"2026-01-23T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.847367 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.848058 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.848296 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.848540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.848838 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:33Z","lastTransitionTime":"2026-01-23T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.953557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.953637 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.953651 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.953672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:33 crc kubenswrapper[4865]: I0123 11:53:33.953683 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:33Z","lastTransitionTime":"2026-01-23T11:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.057139 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.057234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.057252 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.057284 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.057303 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:34Z","lastTransitionTime":"2026-01-23T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.118044 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:34 crc kubenswrapper[4865]: E0123 11:53:34.118317 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.131248 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 18:41:23.890042601 +0000 UTC Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.160700 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.161086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.161194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.161309 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.161429 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:34Z","lastTransitionTime":"2026-01-23T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.264756 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.264797 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.264807 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.264826 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.264839 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:34Z","lastTransitionTime":"2026-01-23T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.368691 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.369098 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.369217 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.369304 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.369391 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:34Z","lastTransitionTime":"2026-01-23T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.472779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.472857 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.472881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.472911 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.472932 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:34Z","lastTransitionTime":"2026-01-23T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.575798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.576092 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.576173 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.576241 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.576304 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:34Z","lastTransitionTime":"2026-01-23T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.679370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.679426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.679445 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.679471 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.679489 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:34Z","lastTransitionTime":"2026-01-23T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.783174 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.783224 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.783235 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.783255 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.783270 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:34Z","lastTransitionTime":"2026-01-23T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.886055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.886138 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.886156 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.886187 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.886206 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:34Z","lastTransitionTime":"2026-01-23T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.990249 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.990293 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.990327 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.990350 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:34 crc kubenswrapper[4865]: I0123 11:53:34.990368 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:34Z","lastTransitionTime":"2026-01-23T11:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.093914 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.093975 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.093992 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.094015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.094031 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:35Z","lastTransitionTime":"2026-01-23T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.117308 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.117407 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:35 crc kubenswrapper[4865]: E0123 11:53:35.117460 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.117308 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:35 crc kubenswrapper[4865]: E0123 11:53:35.117680 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:35 crc kubenswrapper[4865]: E0123 11:53:35.117838 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.132577 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:41:20.087689377 +0000 UTC Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.197061 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.197198 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.197219 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.197241 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.197278 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:35Z","lastTransitionTime":"2026-01-23T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.300119 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.300177 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.300194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.300220 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.300235 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:35Z","lastTransitionTime":"2026-01-23T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.407771 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.408075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.408147 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.408223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.408293 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:35Z","lastTransitionTime":"2026-01-23T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.510515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.510844 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.510953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.511046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.511126 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:35Z","lastTransitionTime":"2026-01-23T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.615135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.615549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.615711 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.615831 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.615928 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:35Z","lastTransitionTime":"2026-01-23T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.718737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.718782 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.718792 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.718810 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.718821 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:35Z","lastTransitionTime":"2026-01-23T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.822179 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.822256 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.822280 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.822314 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.822341 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:35Z","lastTransitionTime":"2026-01-23T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.925712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.925817 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.925848 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.925884 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 11:53:35 crc kubenswrapper[4865]: I0123 11:53:35.925905 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:35Z","lastTransitionTime":"2026-01-23T11:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.030895 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.030994 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.031015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.031045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.031064 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:36Z","lastTransitionTime":"2026-01-23T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.117997 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp"
Jan 23 11:53:36 crc kubenswrapper[4865]: E0123 11:53:36.118156 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca"
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.119412 4865 scope.go:117] "RemoveContainer" containerID="146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b"
Jan 23 11:53:36 crc kubenswrapper[4865]: E0123 11:53:36.119777 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046"
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.133115 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:10:37.633172868 +0000 UTC
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.137524 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.137559 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.137572 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.137592 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.137626 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:36Z","lastTransitionTime":"2026-01-23T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.138460 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.161415 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.188094 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:23Z\\\",\\\"message\\\":\\\"78b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 11:53:23.093765 6412 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 11:53:23.093738 6412 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 11:53:23.093861 6412 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.201505 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.223675 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.238647 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.241383 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.241423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.241438 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.241463 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.241479 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:36Z","lastTransitionTime":"2026-01-23T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.254224 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.270530 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.284294 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.298125 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.316385 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.333636 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.344727 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.345097 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.345175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.345262 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.345348 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:36Z","lastTransitionTime":"2026-01-23T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.352306 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:
52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.367539 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc4643c-4a19-4ead-b3e5-ef1b36053efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13b15729a5822849eeeb33338a7049e7899e43c958eb3ee6acb5fbe4f4bab8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7d
bf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e704eefb278d79695b1251a512db41c42cb4fe7f2b1a8a1a14ce8fea9b46b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.381229 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.392967 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.406317 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:36Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.447445 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.447519 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.447537 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.447560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.447577 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:36Z","lastTransitionTime":"2026-01-23T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.550675 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.550743 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.550761 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.550786 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.550802 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:36Z","lastTransitionTime":"2026-01-23T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.653508 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.653555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.653593 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.653666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.653680 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:36Z","lastTransitionTime":"2026-01-23T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.756312 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.756367 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.756380 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.756402 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.756416 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:36Z","lastTransitionTime":"2026-01-23T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.860010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.860086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.860105 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.860136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.860154 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:36Z","lastTransitionTime":"2026-01-23T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.963432 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.963491 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.963506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.963524 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:36 crc kubenswrapper[4865]: I0123 11:53:36.963537 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:36Z","lastTransitionTime":"2026-01-23T11:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.066704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.066758 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.066769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.066788 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.066800 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:37Z","lastTransitionTime":"2026-01-23T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.117365 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.117423 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.117588 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:37 crc kubenswrapper[4865]: E0123 11:53:37.117574 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:37 crc kubenswrapper[4865]: E0123 11:53:37.117887 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:37 crc kubenswrapper[4865]: E0123 11:53:37.117999 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.134465 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:16:02.553463579 +0000 UTC Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.170364 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.170436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.170455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.170486 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.170508 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:37Z","lastTransitionTime":"2026-01-23T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.273957 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.274034 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.274049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.274074 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.274089 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:37Z","lastTransitionTime":"2026-01-23T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.377376 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.377442 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.377456 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.377478 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.377492 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:37Z","lastTransitionTime":"2026-01-23T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.480927 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.480979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.480991 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.481012 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.481024 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:37Z","lastTransitionTime":"2026-01-23T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.583616 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.583664 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.583678 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.583699 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.583713 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:37Z","lastTransitionTime":"2026-01-23T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.687683 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.687780 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.687795 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.687818 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.687835 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:37Z","lastTransitionTime":"2026-01-23T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.800308 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.800525 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.800558 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.800587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.800637 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:37Z","lastTransitionTime":"2026-01-23T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.903540 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.903571 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.903579 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.903595 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:37 crc kubenswrapper[4865]: I0123 11:53:37.903643 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:37Z","lastTransitionTime":"2026-01-23T11:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.007457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.007504 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.007518 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.007547 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.007621 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.111209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.111265 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.111280 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.111303 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.111320 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.117261 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:38 crc kubenswrapper[4865]: E0123 11:53:38.117373 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.135630 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:52:03.687248424 +0000 UTC Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.213847 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.213882 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.213890 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.213905 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.213914 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.315649 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.315683 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.315694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.315713 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.315724 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.418252 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.418290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.418302 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.418323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.418337 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.520686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.520723 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.520734 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.520750 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.520760 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.623209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.623290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.623307 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.623331 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.623348 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.726414 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.726539 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.726557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.726831 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.726941 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.793728 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.793799 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.793932 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.794164 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.794238 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: E0123 11:53:38.812259 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.816932 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.816980 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.816994 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.817012 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.817026 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: E0123 11:53:38.832442 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.837023 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.837057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.837068 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.837087 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.837099 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: E0123 11:53:38.851652 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.855785 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.855849 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.855861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.855884 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.855899 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: E0123 11:53:38.869409 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.873468 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.873498 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.873508 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.873525 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.873535 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: E0123 11:53:38.884971 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:38Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:38 crc kubenswrapper[4865]: E0123 11:53:38.885079 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.886929 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.886973 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.886984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.886999 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.887011 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.990045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.990096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.990110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.990130 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:38 crc kubenswrapper[4865]: I0123 11:53:38.990142 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:38Z","lastTransitionTime":"2026-01-23T11:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.092262 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.092317 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.092333 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.092356 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.092371 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:39Z","lastTransitionTime":"2026-01-23T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.117760 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.117788 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.117842 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:39 crc kubenswrapper[4865]: E0123 11:53:39.117909 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:39 crc kubenswrapper[4865]: E0123 11:53:39.117990 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:39 crc kubenswrapper[4865]: E0123 11:53:39.118065 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.175899 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 06:32:42.496121621 +0000 UTC Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.194407 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.194466 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.194482 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.194508 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.194528 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:39Z","lastTransitionTime":"2026-01-23T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.297724 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.297789 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.297802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.297824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.297837 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:39Z","lastTransitionTime":"2026-01-23T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.400859 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.400916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.400932 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.400956 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.400973 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:39Z","lastTransitionTime":"2026-01-23T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.503530 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.503572 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.503584 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.503616 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.503628 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:39Z","lastTransitionTime":"2026-01-23T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.605369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.605415 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.605426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.605446 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.605458 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:39Z","lastTransitionTime":"2026-01-23T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.708026 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.708077 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.708087 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.708104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.708113 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:39Z","lastTransitionTime":"2026-01-23T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.810267 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.810308 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.810316 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.810334 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.810355 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:39Z","lastTransitionTime":"2026-01-23T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.912440 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.912488 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.912503 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.912522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:39 crc kubenswrapper[4865]: I0123 11:53:39.912538 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:39Z","lastTransitionTime":"2026-01-23T11:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.014649 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.014700 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.014710 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.014730 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.014742 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:40Z","lastTransitionTime":"2026-01-23T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.117185 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:40 crc kubenswrapper[4865]: E0123 11:53:40.117335 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.119334 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.119363 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.119480 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.119508 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.119525 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:40Z","lastTransitionTime":"2026-01-23T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.176142 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 07:43:27.687494712 +0000 UTC Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.222804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.222855 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.222868 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.222889 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.222903 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:40Z","lastTransitionTime":"2026-01-23T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.325794 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.325856 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.325869 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.325894 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.325908 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:40Z","lastTransitionTime":"2026-01-23T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.429539 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.429613 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.429625 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.429644 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.429655 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:40Z","lastTransitionTime":"2026-01-23T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.532152 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.532211 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.532223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.532246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.532260 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:40Z","lastTransitionTime":"2026-01-23T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.634917 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.634977 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.634989 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.635009 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.635315 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:40Z","lastTransitionTime":"2026-01-23T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.738219 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.738251 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.738259 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.738274 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.738289 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:40Z","lastTransitionTime":"2026-01-23T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.841517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.841578 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.841615 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.841641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.841655 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:40Z","lastTransitionTime":"2026-01-23T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.944256 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.944332 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.944353 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.944382 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:40 crc kubenswrapper[4865]: I0123 11:53:40.944402 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:40Z","lastTransitionTime":"2026-01-23T11:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.047427 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.047495 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.047505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.047525 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.047535 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:41Z","lastTransitionTime":"2026-01-23T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.117727 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.117764 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.117749 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:41 crc kubenswrapper[4865]: E0123 11:53:41.117977 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:41 crc kubenswrapper[4865]: E0123 11:53:41.118077 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:41 crc kubenswrapper[4865]: E0123 11:53:41.118279 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.150116 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.150159 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.150173 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.150232 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.150250 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:41Z","lastTransitionTime":"2026-01-23T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.176460 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 16:00:32.256423994 +0000 UTC Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.253229 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.253294 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.253307 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.253329 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.253341 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:41Z","lastTransitionTime":"2026-01-23T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.356146 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.356193 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.356203 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.356221 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.356231 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:41Z","lastTransitionTime":"2026-01-23T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.460015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.460062 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.460073 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.460093 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.460105 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:41Z","lastTransitionTime":"2026-01-23T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.563387 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.563448 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.563463 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.563488 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.563501 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:41Z","lastTransitionTime":"2026-01-23T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.623813 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:41 crc kubenswrapper[4865]: E0123 11:53:41.624016 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:41 crc kubenswrapper[4865]: E0123 11:53:41.624127 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs podName:a15fb93f-eb63-4a8c-bec6-20bed7300dca nodeName:}" failed. No retries permitted until 2026-01-23 11:54:13.624101379 +0000 UTC m=+97.793173615 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs") pod "network-metrics-daemon-n76rp" (UID: "a15fb93f-eb63-4a8c-bec6-20bed7300dca") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.666010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.666076 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.666101 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.666126 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.666140 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:41Z","lastTransitionTime":"2026-01-23T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.769804 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.769868 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.769885 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.769910 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.769925 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:41Z","lastTransitionTime":"2026-01-23T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.872877 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.872927 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.872937 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.872958 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.872970 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:41Z","lastTransitionTime":"2026-01-23T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.975194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.975231 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.975239 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.975255 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:41 crc kubenswrapper[4865]: I0123 11:53:41.975265 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:41Z","lastTransitionTime":"2026-01-23T11:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.078038 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.078070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.078079 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.078094 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.078103 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:42Z","lastTransitionTime":"2026-01-23T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.117507 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:42 crc kubenswrapper[4865]: E0123 11:53:42.117752 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.177509 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 22:37:48.844888224 +0000 UTC Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.180446 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.180477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.180487 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.180505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.180515 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:42Z","lastTransitionTime":"2026-01-23T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.283629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.283939 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.284010 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.284084 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.284157 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:42Z","lastTransitionTime":"2026-01-23T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.387005 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.387297 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.387384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.387476 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.387562 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:42Z","lastTransitionTime":"2026-01-23T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.491650 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.491906 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.491967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.492072 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.492132 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:42Z","lastTransitionTime":"2026-01-23T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.595565 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.595632 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.595642 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.595661 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.595671 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:42Z","lastTransitionTime":"2026-01-23T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.698593 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.698678 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.698690 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.698714 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.698728 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:42Z","lastTransitionTime":"2026-01-23T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.802198 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.802542 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.802683 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.802800 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.802901 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:42Z","lastTransitionTime":"2026-01-23T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.905794 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.906137 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.906394 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.906611 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:42 crc kubenswrapper[4865]: I0123 11:53:42.906781 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:42Z","lastTransitionTime":"2026-01-23T11:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.009844 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.010086 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.010189 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.010258 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.010326 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:43Z","lastTransitionTime":"2026-01-23T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.113037 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.113320 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.113410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.113483 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.113548 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:43Z","lastTransitionTime":"2026-01-23T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.117428 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.117465 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.117586 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:43 crc kubenswrapper[4865]: E0123 11:53:43.117693 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:43 crc kubenswrapper[4865]: E0123 11:53:43.117838 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:43 crc kubenswrapper[4865]: E0123 11:53:43.117931 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.178004 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 08:52:07.242081108 +0000 UTC Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.216770 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.217061 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.217160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.217251 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.217356 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:43Z","lastTransitionTime":"2026-01-23T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.320828 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.320925 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.320939 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.320960 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.320971 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:43Z","lastTransitionTime":"2026-01-23T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.423161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.423439 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.423506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.423573 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.423666 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:43Z","lastTransitionTime":"2026-01-23T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.526116 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.526301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.526387 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.526478 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.526542 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:43Z","lastTransitionTime":"2026-01-23T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.605873 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cb8rs_b3d06336-44ac-4c17-899b-28cbfe2ee64d/kube-multus/0.log" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.606072 4865 generic.go:334] "Generic (PLEG): container finished" podID="b3d06336-44ac-4c17-899b-28cbfe2ee64d" containerID="6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904" exitCode=1 Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.606226 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cb8rs" event={"ID":"b3d06336-44ac-4c17-899b-28cbfe2ee64d","Type":"ContainerDied","Data":"6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904"} Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.607032 4865 scope.go:117] "RemoveContainer" containerID="6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.628787 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.628902 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.628973 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.629042 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.629110 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:43Z","lastTransitionTime":"2026-01-23T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.630784 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:42Z\\\",\\\"message\\\":\\\"2026-01-23T11:52:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a\\\\n2026-01-23T11:52:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a to /host/opt/cni/bin/\\\\n2026-01-23T11:52:57Z [verbose] multus-daemon started\\\\n2026-01-23T11:52:57Z [verbose] Readiness Indicator file check\\\\n2026-01-23T11:53:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.650125 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.668316 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.682474 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.699267 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.720720 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.732141 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.732196 4865 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.732222 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.732242 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.732254 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:43Z","lastTransitionTime":"2026-01-23T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.740228 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.752123 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc4643c-4a19-4ead-b3e5-ef1b36053efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13b15729a5822849eeeb33338a7049e7899e43c958eb3ee6acb5fbe4f4bab8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52
:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e704eefb278d79695b1251a512db41c42cb4fe7f2b1a8a1a14ce8fea9b46b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.769478 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.782636 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.802131 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.819222 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d9549
2d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.834514 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.834794 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.834862 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.834928 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.834985 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:43Z","lastTransitionTime":"2026-01-23T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.835543 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.858354 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:23Z\\\",\\\"message\\\":\\\"78b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 11:53:23.093765 6412 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 11:53:23.093738 6412 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 11:53:23.093861 6412 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.870778 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.883264 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 
11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.895224 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:43Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.943421 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.943463 4865 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.943475 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.943493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:43 crc kubenswrapper[4865]: I0123 11:53:43.943505 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:43Z","lastTransitionTime":"2026-01-23T11:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.046070 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.046208 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.046292 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.046363 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.046427 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:44Z","lastTransitionTime":"2026-01-23T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.118173 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:44 crc kubenswrapper[4865]: E0123 11:53:44.118349 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.148390 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.148426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.148436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.148452 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.148464 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:44Z","lastTransitionTime":"2026-01-23T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.178855 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:53:24.899612971 +0000 UTC Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.287052 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.287283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.287368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.287466 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.287530 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:44Z","lastTransitionTime":"2026-01-23T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.390404 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.390445 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.390457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.390539 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.390554 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:44Z","lastTransitionTime":"2026-01-23T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.492171 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.492440 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.492515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.492591 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.492683 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:44Z","lastTransitionTime":"2026-01-23T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.595278 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.595326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.595335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.595350 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.595362 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:44Z","lastTransitionTime":"2026-01-23T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.610482 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cb8rs_b3d06336-44ac-4c17-899b-28cbfe2ee64d/kube-multus/0.log" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.610822 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cb8rs" event={"ID":"b3d06336-44ac-4c17-899b-28cbfe2ee64d","Type":"ContainerStarted","Data":"9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06"} Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.626962 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.645588 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.659925 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.674280 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.688056 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:42Z\\\",\\\"message\\\":\\\"2026-01-23T11:52:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a\\\\n2026-01-23T11:52:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a to /host/opt/cni/bin/\\\\n2026-01-23T11:52:57Z [verbose] multus-daemon started\\\\n2026-01-23T11:52:57Z [verbose] Readiness Indicator file check\\\\n2026-01-23T11:53:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.697317 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.697367 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.697380 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.697400 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.697412 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:44Z","lastTransitionTime":"2026-01-23T11:53:44Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.703580 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.714189 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.725971 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.736147 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.746220 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.757955 4865 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc4643c-4a19-4ead-b3e5-ef1b36053efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13b15729a5822849eeeb33338a7049e7899e43c958eb3ee6acb5fbe4f4bab8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e704eefb278d79695b1251a512db41c42cb4fe7f2b1a8a1a14ce8fea9b46b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.772874 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.793015 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae02969
5644b8640eee84f07be9c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:23Z\\\",\\\"message\\\":\\\"78b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 11:53:23.093765 6412 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 11:53:23.093738 6412 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 11:53:23.093861 6412 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.799827 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.799871 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.799909 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.799952 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.800045 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:44Z","lastTransitionTime":"2026-01-23T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.805608 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.819258 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 
11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.832381 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.847414 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:44Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.902493 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.902545 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.902555 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.902575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:44 crc kubenswrapper[4865]: I0123 11:53:44.902630 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:44Z","lastTransitionTime":"2026-01-23T11:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.005008 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.005036 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.005044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.005059 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.005069 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:45Z","lastTransitionTime":"2026-01-23T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.106801 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.106844 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.106856 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.106876 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.106887 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:45Z","lastTransitionTime":"2026-01-23T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.117642 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:45 crc kubenswrapper[4865]: E0123 11:53:45.117754 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.117885 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:45 crc kubenswrapper[4865]: E0123 11:53:45.117934 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.118035 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:45 crc kubenswrapper[4865]: E0123 11:53:45.118088 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.179465 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 06:47:33.554445196 +0000 UTC Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.209942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.210001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.210025 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.210057 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.210078 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:45Z","lastTransitionTime":"2026-01-23T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.312784 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.312816 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.312824 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.312838 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.312848 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:45Z","lastTransitionTime":"2026-01-23T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.415094 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.415141 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.415150 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.415170 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.415184 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:45Z","lastTransitionTime":"2026-01-23T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.517979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.518026 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.518036 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.518055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.518067 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:45Z","lastTransitionTime":"2026-01-23T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.620346 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.620405 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.620417 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.620434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.620447 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:45Z","lastTransitionTime":"2026-01-23T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.722621 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.722690 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.722706 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.722731 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.722753 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:45Z","lastTransitionTime":"2026-01-23T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.825870 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.825925 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.825942 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.825967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.825981 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:45Z","lastTransitionTime":"2026-01-23T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.928114 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.928343 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.928449 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.928528 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:45 crc kubenswrapper[4865]: I0123 11:53:45.928592 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:45Z","lastTransitionTime":"2026-01-23T11:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.031920 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.031965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.031976 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.031993 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.032006 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:46Z","lastTransitionTime":"2026-01-23T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.117566 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:46 crc kubenswrapper[4865]: E0123 11:53:46.117771 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.133551 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.136363 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.136434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.136450 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.136512 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.136960 4865 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:46Z","lastTransitionTime":"2026-01-23T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.150583 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.174503 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174
f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:23Z\\\",\\\"message\\\":\\\"78b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 11:53:23.093765 6412 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 11:53:23.093738 6412 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 11:53:23.093861 6412 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.182785 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:33:39.714934812 +0000 UTC Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.191216 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.207213 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 
11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.223402 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.237858 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.240104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.240154 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.240165 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.240185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.240196 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:46Z","lastTransitionTime":"2026-01-23T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.252024 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.268833 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.283001 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.295804 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.308051 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:42Z\\\",\\\"message\\\":\\\"2026-01-23T11:52:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a\\\\n2026-01-23T11:52:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a to /host/opt/cni/bin/\\\\n2026-01-23T11:52:57Z [verbose] multus-daemon started\\\\n2026-01-23T11:52:57Z [verbose] Readiness Indicator file check\\\\n2026-01-23T11:53:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.328844 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.342668 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.342694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:46 crc 
kubenswrapper[4865]: I0123 11:53:46.342703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.342718 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.342728 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:46Z","lastTransitionTime":"2026-01-23T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.344327 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc4643c-4a19-4ead-b3e5-ef1b36053efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13b15729a5822849eeeb33338a7049e7899e43c958eb3ee6acb5fbe4f4bab8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://7e704eefb278d79695b1251a512db41c42cb4fe7f2b1a8a1a14ce8fea9b46b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.356345 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.365347 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.376511 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:46Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.445266 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.445299 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.445309 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.445327 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.445338 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:46Z","lastTransitionTime":"2026-01-23T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.547529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.547574 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.547584 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.547615 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.547626 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:46Z","lastTransitionTime":"2026-01-23T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.650553 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.650628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.650648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.650674 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.650690 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:46Z","lastTransitionTime":"2026-01-23T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.754560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.754650 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.754672 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.754703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.754725 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:46Z","lastTransitionTime":"2026-01-23T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.857115 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.857172 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.857185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.857204 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.857217 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:46Z","lastTransitionTime":"2026-01-23T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.959776 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.959838 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.959848 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.959870 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:46 crc kubenswrapper[4865]: I0123 11:53:46.959885 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:46Z","lastTransitionTime":"2026-01-23T11:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.070321 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.070697 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.070803 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.070870 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.070942 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:47Z","lastTransitionTime":"2026-01-23T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.117731 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.117729 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:47 crc kubenswrapper[4865]: E0123 11:53:47.117901 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:47 crc kubenswrapper[4865]: E0123 11:53:47.117981 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.118297 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:47 crc kubenswrapper[4865]: E0123 11:53:47.118512 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.173459 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.173494 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.173503 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.173520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.173530 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:47Z","lastTransitionTime":"2026-01-23T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.183154 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 05:37:08.569583278 +0000 UTC Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.278075 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.278250 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.278287 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.278322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.278349 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:47Z","lastTransitionTime":"2026-01-23T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.381325 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.381401 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.381424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.381453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.381475 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:47Z","lastTransitionTime":"2026-01-23T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.485448 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.485514 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.485538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.485569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.485592 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:47Z","lastTransitionTime":"2026-01-23T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.588377 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.588436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.588449 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.588469 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.588486 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:47Z","lastTransitionTime":"2026-01-23T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.691653 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.691718 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.691743 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.691771 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.691786 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:47Z","lastTransitionTime":"2026-01-23T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.794434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.794516 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.794542 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.794563 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.794574 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:47Z","lastTransitionTime":"2026-01-23T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.897290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.897354 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.897369 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.897391 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:47 crc kubenswrapper[4865]: I0123 11:53:47.897406 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:47Z","lastTransitionTime":"2026-01-23T11:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.001059 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.001329 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.001477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.001591 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.001733 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:48Z","lastTransitionTime":"2026-01-23T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.104161 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.104473 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.104578 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.104676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.104747 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:48Z","lastTransitionTime":"2026-01-23T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.117592 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:48 crc kubenswrapper[4865]: E0123 11:53:48.117842 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.184424 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:43:10.020291249 +0000 UTC Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.215725 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.215779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.215790 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.215808 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.215819 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:48Z","lastTransitionTime":"2026-01-23T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.318638 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.318736 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.318753 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.318781 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.318800 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:48Z","lastTransitionTime":"2026-01-23T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.422229 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.422299 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.422312 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.422331 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.422343 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:48Z","lastTransitionTime":"2026-01-23T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.525729 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.525795 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.525812 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.525835 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.525856 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:48Z","lastTransitionTime":"2026-01-23T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.628418 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.628481 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.628498 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.628529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.628547 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:48Z","lastTransitionTime":"2026-01-23T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.731544 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.731927 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.732159 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.732352 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.732561 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:48Z","lastTransitionTime":"2026-01-23T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.835666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.835704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.835735 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.835751 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.835761 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:48Z","lastTransitionTime":"2026-01-23T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.941132 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.941550 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.941743 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.941887 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:48 crc kubenswrapper[4865]: I0123 11:53:48.942028 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:48Z","lastTransitionTime":"2026-01-23T11:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.044750 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.044788 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.044797 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.044814 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.044845 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.117336 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.117359 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:49 crc kubenswrapper[4865]: E0123 11:53:49.117481 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:49 crc kubenswrapper[4865]: E0123 11:53:49.117584 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.117782 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:49 crc kubenswrapper[4865]: E0123 11:53:49.118005 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.148705 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.149064 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.149322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.149502 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.149701 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.184866 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:06:04.837104552 +0000 UTC Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.234277 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.234414 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.234511 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.234622 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.234716 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: E0123 11:53:49.250721 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:49Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.256803 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.256861 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.256876 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.256898 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.256914 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: E0123 11:53:49.276301 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:49Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.281162 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.281234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.281253 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.281284 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.281303 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: E0123 11:53:49.345091 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:49Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.349782 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.349829 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.349840 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.349858 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.349868 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: E0123 11:53:49.363732 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:49Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.367523 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.367548 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.367556 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.367568 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.367579 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: E0123 11:53:49.381457 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:49Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:49 crc kubenswrapper[4865]: E0123 11:53:49.381623 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.383080 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.383127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.383156 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.383175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.383190 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.486038 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.486099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.486113 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.486135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.486240 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.589001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.589056 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.589071 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.589096 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.589116 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.692664 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.692708 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.692717 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.692734 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.692744 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.795076 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.795111 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.795120 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.795135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.795144 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.898679 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.898720 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.898730 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.898747 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:49 crc kubenswrapper[4865]: I0123 11:53:49.898757 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:49Z","lastTransitionTime":"2026-01-23T11:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.001361 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.001409 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.001446 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.001466 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.001479 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:50Z","lastTransitionTime":"2026-01-23T11:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.105368 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.105415 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.105430 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.105451 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.105464 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:50Z","lastTransitionTime":"2026-01-23T11:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.117753 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:50 crc kubenswrapper[4865]: E0123 11:53:50.117877 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.186417 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:22:04.928765243 +0000 UTC Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.209089 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.209133 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.209149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.209175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.209197 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:50Z","lastTransitionTime":"2026-01-23T11:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.311764 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.311813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.311830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.311855 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.311876 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:50Z","lastTransitionTime":"2026-01-23T11:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.415332 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.415388 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.415410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.415441 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.415463 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:50Z","lastTransitionTime":"2026-01-23T11:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.520677 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.520762 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.520780 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.520811 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.520833 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:50Z","lastTransitionTime":"2026-01-23T11:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.624400 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.624457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.624477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.624505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.624525 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:50Z","lastTransitionTime":"2026-01-23T11:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.729439 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.729538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.729557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.729584 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.729629 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:50Z","lastTransitionTime":"2026-01-23T11:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.833852 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.833972 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.834000 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.834035 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.834060 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:50Z","lastTransitionTime":"2026-01-23T11:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.937742 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.937800 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.937813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.937831 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:50 crc kubenswrapper[4865]: I0123 11:53:50.937844 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:50Z","lastTransitionTime":"2026-01-23T11:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.040050 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.040093 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.040103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.040121 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.040135 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:51Z","lastTransitionTime":"2026-01-23T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.117307 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.117375 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:51 crc kubenswrapper[4865]: E0123 11:53:51.117472 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.117317 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:51 crc kubenswrapper[4865]: E0123 11:53:51.118155 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:51 crc kubenswrapper[4865]: E0123 11:53:51.118471 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.118908 4865 scope.go:117] "RemoveContainer" containerID="146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.147029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.147079 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.147091 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.147111 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.147128 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:51Z","lastTransitionTime":"2026-01-23T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.186589 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 16:20:12.058613403 +0000 UTC Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.250263 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.250319 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.250340 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.250366 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.250387 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:51Z","lastTransitionTime":"2026-01-23T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.353779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.353872 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.353908 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.353940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.353963 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:51Z","lastTransitionTime":"2026-01-23T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.457503 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.458073 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.458104 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.458180 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.458212 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:51Z","lastTransitionTime":"2026-01-23T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.561977 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.562028 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.562041 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.562063 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.562078 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:51Z","lastTransitionTime":"2026-01-23T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.638544 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/2.log" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.642127 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665"} Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.642761 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.662330 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.663928 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.663956 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.663966 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.663984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.663996 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:51Z","lastTransitionTime":"2026-01-23T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.686280 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc
980c5dbe0d9872f579822665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:23Z\\\",\\\"message\\\":\\\"78b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 11:53:23.093765 6412 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 11:53:23.093738 6412 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 11:53:23.093861 6412 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.706273 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.775063 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.775099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.775110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.775127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.775141 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:51Z","lastTransitionTime":"2026-01-23T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.777644 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.810860 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.837925 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resour
ces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.850811 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.863899 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.875236 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.877623 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.877660 4865 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.877673 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.877693 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.877708 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:51Z","lastTransitionTime":"2026-01-23T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.889001 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.901583 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:42Z\\\",\\\"message\\\":\\\"2026-01-23T11:52:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a\\\\n2026-01-23T11:52:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a to /host/opt/cni/bin/\\\\n2026-01-23T11:52:57Z [verbose] multus-daemon started\\\\n2026-01-23T11:52:57Z [verbose] Readiness Indicator file check\\\\n2026-01-23T11:53:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.915658 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.931251 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.943367 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.958368 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.972661 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.979205 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.979235 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.979243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.979255 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.979264 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:51Z","lastTransitionTime":"2026-01-23T11:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:51 crc kubenswrapper[4865]: I0123 11:53:51.990484 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc4643c-4a19-4ead-b3e5-ef1b36053efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13b15729a5822849eeeb33338a7049e7899e43c958eb3ee6acb5fbe4f4bab8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e704eefb278d79695b1251a512db41c42cb4fe7f2b1a8a1a14ce8fea9b46b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:51Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.080988 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.081022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.081030 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.081044 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.081053 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:52Z","lastTransitionTime":"2026-01-23T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.117550 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:52 crc kubenswrapper[4865]: E0123 11:53:52.117738 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.183465 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.183514 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.183528 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.183544 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.183556 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:52Z","lastTransitionTime":"2026-01-23T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.186909 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 06:35:36.230466749 +0000 UTC Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.286126 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.286163 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.286172 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.286186 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.286197 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:52Z","lastTransitionTime":"2026-01-23T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.388360 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.388398 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.388408 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.388421 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.388434 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:52Z","lastTransitionTime":"2026-01-23T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.490736 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.490773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.490785 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.490808 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.490819 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:52Z","lastTransitionTime":"2026-01-23T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.592881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.592916 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.592925 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.592940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.592950 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:52Z","lastTransitionTime":"2026-01-23T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.646830 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/3.log" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.647525 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/2.log" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.650308 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665" exitCode=1 Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.650354 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665"} Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.650404 4865 scope.go:117] "RemoveContainer" containerID="146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.651413 4865 scope.go:117] "RemoveContainer" containerID="ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665" Jan 23 11:53:52 crc kubenswrapper[4865]: E0123 11:53:52.651716 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.664271 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.695499 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.695536 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.695546 4865 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.695564 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.695575 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:52Z","lastTransitionTime":"2026-01-23T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.713344 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc
980c5dbe0d9872f579822665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://146578fa0212ee63443e453e3d543ed4cae029695644b8640eee84f07be9c85b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:23Z\\\",\\\"message\\\":\\\"78b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0123 11:53:23.093765 6412 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 11:53:23.093738 6412 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-ingress-canary/ingress-canary]} name:Service_openshift-ingress-canary/ingress-canary_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.34:8443: 10.217.5.34:8888:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7715118b-bb1b-400a-803e-7ab2cc3eeec0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 11:53:23.093861 6412 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:52Z\\\",\\\"message\\\":\\\"ndler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 11:53:52.134540 6781 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 11:53:52.132538 6781 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 11:53:52.135389 6781 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 11:53:52.135477 6781 factory.go:656] Stopping watch factory\\\\nI0123 11:53:52.135501 6781 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 11:53:52.135658 6781 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 11:53:52.157329 6781 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 
11:53:52.157387 6781 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 11:53:52.157497 6781 ovnkube.go:599] Stopped ovnkube\\\\nI0123 11:53:52.157554 6781 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 11:53:52.157750 6781 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.726190 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.738283 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 
11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.749506 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.762520 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.776312 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.790625 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.796954 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.796989 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.796998 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.797012 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.797022 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:52Z","lastTransitionTime":"2026-01-23T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.807542 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.822393 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.834638 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:42Z\\\",\\\"message\\\":\\\"2026-01-23T11:52:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a\\\\n2026-01-23T11:52:57+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a to /host/opt/cni/bin/\\\\n2026-01-23T11:52:57Z [verbose] multus-daemon started\\\\n2026-01-23T11:52:57Z [verbose] Readiness Indicator file check\\\\n2026-01-23T11:53:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.849309 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.862399 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.874690 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.884482 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.895873 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.899680 4865 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.899767 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.899779 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.899798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.899810 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:52Z","lastTransitionTime":"2026-01-23T11:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:52 crc kubenswrapper[4865]: I0123 11:53:52.907569 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc4643c-4a19-4ead-b3e5-ef1b36053efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13b15729a5822849eeeb33338a7049e7899e43c958eb3ee6acb5fbe4f4bab8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e704eefb278d79695b1251a512db41c42cb4fe7f2b1a8a1a14ce8fea9b46b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:52Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.002250 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.002302 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.002314 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.002326 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.002336 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:53Z","lastTransitionTime":"2026-01-23T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.106043 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.107053 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.107227 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.107475 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.107718 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:53Z","lastTransitionTime":"2026-01-23T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.117798 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:53 crc kubenswrapper[4865]: E0123 11:53:53.117940 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.117798 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.118095 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:53 crc kubenswrapper[4865]: E0123 11:53:53.118356 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:53 crc kubenswrapper[4865]: E0123 11:53:53.118723 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.189785 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 17:10:32.490696269 +0000 UTC Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.210214 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.210260 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.210273 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.210293 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.210308 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:53Z","lastTransitionTime":"2026-01-23T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.313294 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.313343 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.313352 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.313370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.313380 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:53Z","lastTransitionTime":"2026-01-23T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.416463 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.416520 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.416533 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.416553 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.416570 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:53Z","lastTransitionTime":"2026-01-23T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.519885 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.520006 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.520023 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.520049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.520063 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:53Z","lastTransitionTime":"2026-01-23T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.623535 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.623625 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.623643 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.623685 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.623703 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:53Z","lastTransitionTime":"2026-01-23T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.657294 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/3.log" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.663007 4865 scope.go:117] "RemoveContainer" containerID="ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665" Jan 23 11:53:53 crc kubenswrapper[4865]: E0123 11:53:53.663228 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.678292 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc4643c-4a19-4ead-b3e5-ef1b36053efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13b15729a5822849eeeb33338a7049e7899e43c958eb3ee6acb5fbe4f4bab8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e704eefb278d79695b1251a512db41c42cb4fe7f2b1a8a1a14ce8fea9b46b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.695071 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.724210 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.726277 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.726313 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.726327 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.726349 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.726366 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:53Z","lastTransitionTime":"2026-01-23T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.740105 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.758765 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.775396 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.796291 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:52Z\\\",\\\"message\\\":\\\"ndler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 11:53:52.134540 6781 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 11:53:52.132538 6781 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 11:53:52.135389 6781 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 11:53:52.135477 6781 factory.go:656] Stopping watch factory\\\\nI0123 11:53:52.135501 6781 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 11:53:52.135658 6781 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 11:53:52.157329 6781 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 11:53:52.157387 6781 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 11:53:52.157497 6781 ovnkube.go:599] Stopped ovnkube\\\\nI0123 11:53:52.157554 6781 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 11:53:52.157750 6781 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.809378 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.823972 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.828688 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.828733 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.828750 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.828773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.828790 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:53Z","lastTransitionTime":"2026-01-23T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.839048 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.853285 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:42Z\\\",\\\"message\\\":\\\"2026-01-23T11:52:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a\\\\n2026-01-23T11:52:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a to /host/opt/cni/bin/\\\\n2026-01-23T11:52:57Z [verbose] multus-daemon started\\\\n2026-01-23T11:52:57Z [verbose] Readiness Indicator file check\\\\n2026-01-23T11:53:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.870913 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.887753 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.910237 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.927005 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.931102 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.931170 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.931185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.931209 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.931224 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:53Z","lastTransitionTime":"2026-01-23T11:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.941659 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:53 crc kubenswrapper[4865]: I0123 11:53:53.957623 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:53Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.034192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.034223 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.034234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.034276 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.034287 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:54Z","lastTransitionTime":"2026-01-23T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.117509 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:54 crc kubenswrapper[4865]: E0123 11:53:54.117663 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.137049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.137094 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.137106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.137122 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.137135 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:54Z","lastTransitionTime":"2026-01-23T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.190449 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:55:42.503681811 +0000 UTC Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.240383 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.240509 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.240531 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.240565 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.240588 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:54Z","lastTransitionTime":"2026-01-23T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.344032 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.344073 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.344085 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.344106 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.344120 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:54Z","lastTransitionTime":"2026-01-23T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.447525 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.447676 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.447694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.447712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.447724 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:54Z","lastTransitionTime":"2026-01-23T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.550338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.550383 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.550396 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.550415 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.550427 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:54Z","lastTransitionTime":"2026-01-23T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.652397 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.652434 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.652447 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.652465 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.652476 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:54Z","lastTransitionTime":"2026-01-23T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.755058 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.755120 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.755131 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.755148 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.755160 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:54Z","lastTransitionTime":"2026-01-23T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.857514 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.857557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.857568 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.857583 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.857594 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:54Z","lastTransitionTime":"2026-01-23T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.960569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.960613 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.960622 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.960636 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:54 crc kubenswrapper[4865]: I0123 11:53:54.960645 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:54Z","lastTransitionTime":"2026-01-23T11:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.064970 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.065036 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.065058 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.065089 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.065109 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:55Z","lastTransitionTime":"2026-01-23T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.118004 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.118077 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.118077 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:55 crc kubenswrapper[4865]: E0123 11:53:55.118244 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:55 crc kubenswrapper[4865]: E0123 11:53:55.118292 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:55 crc kubenswrapper[4865]: E0123 11:53:55.118556 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.141643 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.168413 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.168454 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.168463 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.168477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.168487 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:55Z","lastTransitionTime":"2026-01-23T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.191154 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:48:17.180645629 +0000 UTC Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.271215 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.271287 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.271305 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.271335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.271354 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:55Z","lastTransitionTime":"2026-01-23T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.375050 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.375145 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.375175 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.375205 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.375231 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:55Z","lastTransitionTime":"2026-01-23T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.478694 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.478788 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.478805 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.478830 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.478844 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:55Z","lastTransitionTime":"2026-01-23T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.581505 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.581549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.581557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.581586 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.581612 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:55Z","lastTransitionTime":"2026-01-23T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.686388 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.686446 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.686461 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.686486 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.686501 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:55Z","lastTransitionTime":"2026-01-23T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.789798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.790230 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.790243 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.790262 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.790275 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:55Z","lastTransitionTime":"2026-01-23T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.892170 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.892292 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.892301 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.892315 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.892325 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:55Z","lastTransitionTime":"2026-01-23T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.995548 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.995631 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.995649 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.995695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:55 crc kubenswrapper[4865]: I0123 11:53:55.995716 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:55Z","lastTransitionTime":"2026-01-23T11:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.099337 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.099411 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.099428 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.099455 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.099475 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:56Z","lastTransitionTime":"2026-01-23T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.117081 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:56 crc kubenswrapper[4865]: E0123 11:53:56.117261 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.164304 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ea3549b-3898-4d82-8240-2e062b4a6046\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc
980c5dbe0d9872f579822665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:52Z\\\",\\\"message\\\":\\\"ndler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 11:53:52.134540 6781 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 11:53:52.132538 6781 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 11:53:52.135389 6781 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 11:53:52.135477 6781 factory.go:656] Stopping watch factory\\\\nI0123 11:53:52.135501 6781 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 11:53:52.135658 6781 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 11:53:52.157329 6781 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0123 11:53:52.157387 6781 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0123 11:53:52.157497 6781 ovnkube.go:599] Stopped ovnkube\\\\nI0123 11:53:52.157554 6781 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0123 11:53:52.157750 6781 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wl88l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-68shs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.180163 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wrntt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e0c4178d-fd12-43de-a232-00b6b7ed5866\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39586fa76f2dbb4872982cb5db8d13f48d6919e0e550410a4f6dec777d57f1d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7nkjv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wrntt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.192150 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 06:16:24.013454317 +0000 UTC Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.199576 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bd2bc34-d218-45e1-b168-a304fab36d86\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e785b727814db7772c31ac59b2d07d02f9c0333b4391d0baa2978dafeae4b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5728dd12d1f06d71673d78f53402650a5c0d2e4153aef78d31a9c5b74458c3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ppkvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-54mz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.201858 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.201965 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.202061 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.202126 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.202201 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:56Z","lastTransitionTime":"2026-01-23T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.221032 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-n76rp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a15fb93f-eb63-4a8c-bec6-20bed7300dca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fprs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-n76rp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.236777 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"message\\\":\\\"shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0123 11:52:54.348799 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0123 11:52:54.348816 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0123 11:52:54.348951 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0123 11:52:54.348969 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0123 11:52:54.349040 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769169158\\\\\\\\\\\\\\\" (2026-01-23 11:52:37 +0000 UTC to 2026-02-22 11:52:38 +0000 UTC (now=2026-01-23 11:52:54.348995891 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349236 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769169168\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769169168\\\\\\\\\\\\\\\" (2026-01-23 10:52:48 +0000 UTC to 2027-01-23 10:52:48 +0000 UTC (now=2026-01-23 11:52:54.349208586 +0000 UTC))\\\\\\\"\\\\nI0123 11:52:54.349263 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0123 11:52:54.349293 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0123 11:52:54.349323 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3566712873/tls.crt::/tmp/serving-cert-3566712873/tls.key\\\\\\\"\\\\nI0123 11:52:54.349442 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nF0123 11:52:54.353408 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.247764 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d448bd944d171f8b3b71621d304795b3601e7a1565e741997df73cbf4156755\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.260907 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.274279 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.288369 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34881537fb6fbcd8898674c855b78803957cfcfebcacb7e492fa3701a63fca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.302048 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.304548 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.304660 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.304735 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.304800 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.304875 4865 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:56Z","lastTransitionTime":"2026-01-23T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.317937 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cb8rs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3d06336-44ac-4c17-899b-28cbfe2ee64d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T11:53:42Z\\\",\\\"message\\\":\\\"2026-01-23T11:52:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a\\\\n2026-01-23T11:52:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1234f1c2-5a88-4ebd-868c-492efea3cc6a to /host/opt/cni/bin/\\\\n2026-01-23T11:52:57Z [verbose] multus-daemon started\\\\n2026-01-23T11:52:57Z [verbose] Readiness Indicator file check\\\\n2026-01-23T11:53:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w9brx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cb8rs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.334032 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qwf88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b5ee7d8-e4b9-4df1-96b9-a922e6d801e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5952f2f911152b2bf18959653fc360403ff8dccc29ff67537f284dd3eef1c093\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:53:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://594ad77dc20347ffd526d380d5ea6e9deffe0df95d895ded2152a02586cfff32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5932cd56a3dd471ed4af86cecd3a2e120cafb2b4b2f80abcac883da9224fd86c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55687cc9ab1f8d6bd0d259ad99a43463bc9416a6791185dd4f2e287da74bc816\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dc664697e3d2416837f08ee7a51b5b73178fc69673f614efe86d0d102ed757\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9b4d0b13e122ccb55f4413e6b0d63863f2d05c39d5abced4839018e25ba8377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24ba58fdc7d58c28d75b82b4927045d75138374b17d9f7610e35a0b8b22dedca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:53:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:53:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bn5g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qwf88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.346757 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f881c9ad-7b34-41fa-a3ea-966f386e08b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f3531cb150781e24b62ea35cd2812e1488a2e44eea10956f28359d5197cfb21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://170af1052f17146a6703e301146560d54af41af75d13a1cb0d6c47ea3cac1c28\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9025440e44f3f75e02509102b057c44aa11d1b7b8b3cc893582b38f8125b0b41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.370781 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7488ec41-11e9-4d48-acb1-a91b405cf325\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6826d9e6f25e2df12839d21bb8ad3506f2f7fe5a358cf47f3a0aa61b1533da82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://32d22e7eb1f69f58afd7cc8366caece7385ce349a9f6838dac8255ecbca769f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52542b62e37525fa85644a95293f6ced96343128fc98b18da7853ace1ddb8881\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9414c4b34eff1c122a2bbc7d142b009f5a0bd6237032f73084beba1625113d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b340619fcf055ece3f70923424892ce7912d47d3a328cbf387b747415f01511\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13f5fe979055d878de5615c1bdac06f3dbb4a14a5ad02ccd769e0acce65d28f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13f5fe979055d878de5615c1bdac06f3dbb4a14a5ad02ccd769e0acce65d28f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b72cdbeb350645b31d20c88b564042f00703f31595af579dc3496d4213af357a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b72cdbeb350645b31d20c88b564042f00703f31595af579dc3496d4213af357a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://81ae1738684a8eabc5c38f86032bcbcd58f36290d431877e861e611bcf0a1116\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81ae1738684a8eabc5c38f86032bcbcd58f36290d431877e861e611bcf0a1116\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.384666 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab7ab4017f807c106b26c0eb2e50ffa26f13a9086a7e8a2e974ab7972fe0934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.398369 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l5tpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eedf452a-daa6-4d9c-94ed-ca47edac4448\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e573e60d7163b95d43cfbf026bfa948ed0f4a6f923b8c90faef2660fcacf967b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fzj6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l5tpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.407233 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.407282 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.407296 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.407318 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.407333 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:56Z","lastTransitionTime":"2026-01-23T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.410536 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a3e02e9cb07d8a64336872757e8a85a12a376de87a3f3abaf68ca970a12835\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfmcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sgp5m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.424025 4865 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cfc4643c-4a19-4ead-b3e5-ef1b36053efa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13b15729a5822849eeeb33338a7049e7899e43c958eb3ee6acb5fbe4f4bab8ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e704eefb278d79695b1251a512db41c42cb4fe7f2b1a8a1a14ce8fea9b46b94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T11:52:37Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e2fbf085c2c45616dd8a238ffe356ec383402e6a49cb6f8b21711af777ba494\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T11:52:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T11:52:36Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T11:52:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:56Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.510633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.510695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.510712 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.510737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.510757 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:56Z","lastTransitionTime":"2026-01-23T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.613625 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.613670 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.613686 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.613703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.613715 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:56Z","lastTransitionTime":"2026-01-23T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.716681 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.716755 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.716773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.716807 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.716837 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:56Z","lastTransitionTime":"2026-01-23T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.820704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.820825 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.820845 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.820869 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.820887 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:56Z","lastTransitionTime":"2026-01-23T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.931123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.931215 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.931241 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.931279 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:56 crc kubenswrapper[4865]: I0123 11:53:56.931302 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:56Z","lastTransitionTime":"2026-01-23T11:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.034344 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.034388 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.034397 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.034410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.034419 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:57Z","lastTransitionTime":"2026-01-23T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.117724 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.117863 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.118232 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:57 crc kubenswrapper[4865]: E0123 11:53:57.118356 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:57 crc kubenswrapper[4865]: E0123 11:53:57.118519 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:57 crc kubenswrapper[4865]: E0123 11:53:57.118693 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.137894 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.137941 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.137953 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.137971 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.137984 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:57Z","lastTransitionTime":"2026-01-23T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.192453 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:10:29.910054413 +0000 UTC Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.240352 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.240384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.240444 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.240464 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.240488 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:57Z","lastTransitionTime":"2026-01-23T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.343091 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.343118 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.343126 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.343138 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.343147 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:57Z","lastTransitionTime":"2026-01-23T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.445278 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.445320 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.445330 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.445348 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.445364 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:57Z","lastTransitionTime":"2026-01-23T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.551492 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.551569 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.551586 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.551648 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.551662 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:57Z","lastTransitionTime":"2026-01-23T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.654549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.654644 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.654665 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.654696 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.654720 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:57Z","lastTransitionTime":"2026-01-23T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.757416 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.757453 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.757461 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.757474 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.757483 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:57Z","lastTransitionTime":"2026-01-23T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.859479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.859557 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.859587 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.859659 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.859682 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:57Z","lastTransitionTime":"2026-01-23T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.962659 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.962702 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.962717 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.962738 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:57 crc kubenswrapper[4865]: I0123 11:53:57.962753 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:57Z","lastTransitionTime":"2026-01-23T11:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.065652 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.065702 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.065715 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.065732 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.065744 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:58Z","lastTransitionTime":"2026-01-23T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.118120 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:53:58 crc kubenswrapper[4865]: E0123 11:53:58.118314 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.167506 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.167549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.167560 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.167575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.167585 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:58Z","lastTransitionTime":"2026-01-23T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.192923 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:59:17.277699829 +0000 UTC Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.269926 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.269979 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.269994 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.270015 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.270032 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:58Z","lastTransitionTime":"2026-01-23T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.372627 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.372667 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.372678 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.372693 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.372704 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:58Z","lastTransitionTime":"2026-01-23T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.475081 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.475127 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.475139 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.475160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.475203 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:58Z","lastTransitionTime":"2026-01-23T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.577706 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.577744 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.577754 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.577769 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.577779 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:58Z","lastTransitionTime":"2026-01-23T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.680344 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.680384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.680396 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.680410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.680420 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:58Z","lastTransitionTime":"2026-01-23T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.782344 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.782396 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.782408 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.782425 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.782438 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:58Z","lastTransitionTime":"2026-01-23T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.884714 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.884786 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.884813 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.884837 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.884854 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:58Z","lastTransitionTime":"2026-01-23T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.966959 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:53:58 crc kubenswrapper[4865]: E0123 11:53:58.967168 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:02.967140652 +0000 UTC m=+147.136212888 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.987361 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.987426 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.987445 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.987469 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:58 crc kubenswrapper[4865]: I0123 11:53:58.987504 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:58Z","lastTransitionTime":"2026-01-23T11:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.068385 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.068419 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.068438 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.068460 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.068561 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.068575 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.068585 4865 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.068649 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.06863623 +0000 UTC m=+147.237708456 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.068672 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.068720 4865 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.068730 4865 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.068740 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.068734512 +0000 UTC m=+147.237806738 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.068753 4865 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.068842 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.068814924 +0000 UTC m=+147.237887190 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.068686 4865 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.068914 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.068899586 +0000 UTC m=+147.237971842 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.090121 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.090144 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.090152 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.090163 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.090171 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.117017 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.117144 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.117262 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.117286 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.117407 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.117732 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.193029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.193423 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.193442 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.193070 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 09:56:47.165915849 +0000 UTC Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.193466 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.193551 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.296894 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.296938 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.296954 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.296975 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.296992 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.400752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.400815 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.400832 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.400857 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.400874 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.500704 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.500759 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.500778 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.500803 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.500820 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.523066 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.529629 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.529670 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.529679 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.529695 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.529705 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.551971 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.556296 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.556403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.556422 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.556447 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.556466 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.575980 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.580323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.580410 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.580438 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.580470 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.580497 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.596807 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.600990 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.601244 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.601487 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.601798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.602091 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.621677 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148052Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608852Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T11:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fc8e73b9-5731-4055-8f0b-defdec7b14e0\\\",\\\"systemUUID\\\":\\\"bb0a19dc-3efc-4874-8a0b-6a80f91a629b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T11:53:59Z is after 2025-08-24T17:21:41Z" Jan 23 11:53:59 crc kubenswrapper[4865]: E0123 11:53:59.621810 4865 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.624050 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.624103 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.624120 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.624148 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.624165 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.727032 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.727391 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.727542 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.727729 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.727888 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.831185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.831247 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.831256 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.831271 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.831280 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.934006 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.934046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.934055 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.934068 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:53:59 crc kubenswrapper[4865]: I0123 11:53:59.934077 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:53:59Z","lastTransitionTime":"2026-01-23T11:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.036641 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.036854 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.036968 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.037041 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.037100 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:00Z","lastTransitionTime":"2026-01-23T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.117828 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:00 crc kubenswrapper[4865]: E0123 11:54:00.118246 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.140082 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.140124 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.140136 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.140153 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.140164 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:00Z","lastTransitionTime":"2026-01-23T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.194219 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:50:48.93117509 +0000 UTC Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.242692 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.242761 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.242780 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.242802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.242819 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:00Z","lastTransitionTime":"2026-01-23T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.345831 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.345897 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.345914 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.345937 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.345959 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:00Z","lastTransitionTime":"2026-01-23T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.448984 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.449029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.449042 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.449059 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.449070 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:00Z","lastTransitionTime":"2026-01-23T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.551462 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.551517 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.551530 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.551549 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.551565 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:00Z","lastTransitionTime":"2026-01-23T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.654110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.654154 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.654166 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.654224 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.654265 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:00Z","lastTransitionTime":"2026-01-23T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.756246 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.756282 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.756290 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.756304 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.756313 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:00Z","lastTransitionTime":"2026-01-23T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.858750 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.858798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.858812 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.858829 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.858843 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:00Z","lastTransitionTime":"2026-01-23T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.961437 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.961489 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.961503 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.961522 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:00 crc kubenswrapper[4865]: I0123 11:54:00.961538 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:00Z","lastTransitionTime":"2026-01-23T11:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.063751 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.063796 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.063809 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.063826 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.063840 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:01Z","lastTransitionTime":"2026-01-23T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.117775 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.117869 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.117918 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:01 crc kubenswrapper[4865]: E0123 11:54:01.118056 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:01 crc kubenswrapper[4865]: E0123 11:54:01.118254 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:01 crc kubenswrapper[4865]: E0123 11:54:01.118369 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.166545 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.166652 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.166697 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.166721 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.166735 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:01Z","lastTransitionTime":"2026-01-23T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.195356 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:45:38.720369336 +0000 UTC Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.268873 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.268907 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.268919 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.268935 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.268947 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:01Z","lastTransitionTime":"2026-01-23T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.370617 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.370646 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.370655 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.370667 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.370677 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:01Z","lastTransitionTime":"2026-01-23T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.473284 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.473314 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.473323 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.473335 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.473344 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:01Z","lastTransitionTime":"2026-01-23T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.575768 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.575806 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.575820 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.575836 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.575848 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:01Z","lastTransitionTime":"2026-01-23T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.678782 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.678833 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.678848 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.678867 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.678884 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:01Z","lastTransitionTime":"2026-01-23T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.781366 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.781403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.781412 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.781427 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.781439 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:01Z","lastTransitionTime":"2026-01-23T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.884445 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.884489 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.884507 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.884529 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.884546 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:01Z","lastTransitionTime":"2026-01-23T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.986991 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.987046 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.987064 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.987089 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:01 crc kubenswrapper[4865]: I0123 11:54:01.987108 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:01Z","lastTransitionTime":"2026-01-23T11:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.090131 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.090176 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.090188 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.090206 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.090219 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:02Z","lastTransitionTime":"2026-01-23T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.117925 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:02 crc kubenswrapper[4865]: E0123 11:54:02.118187 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.192943 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.192990 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.193002 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.193021 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.193035 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:02Z","lastTransitionTime":"2026-01-23T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.196137 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:46:43.987547046 +0000 UTC Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.295374 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.295424 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.295436 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.295454 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.295467 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:02Z","lastTransitionTime":"2026-01-23T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.397957 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.397995 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.398005 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.398022 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.398033 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:02Z","lastTransitionTime":"2026-01-23T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.500734 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.500773 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.500786 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.500802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.500814 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:02Z","lastTransitionTime":"2026-01-23T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.604786 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.604832 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.604857 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.604876 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.604888 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:02Z","lastTransitionTime":"2026-01-23T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.707583 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.707655 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.707671 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.707692 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.707706 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:02Z","lastTransitionTime":"2026-01-23T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.809777 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.809862 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.809908 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.809933 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.809949 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:02Z","lastTransitionTime":"2026-01-23T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.912297 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.912351 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.912367 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.912384 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:02 crc kubenswrapper[4865]: I0123 11:54:02.912399 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:02Z","lastTransitionTime":"2026-01-23T11:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.014802 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.014851 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.014863 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.014879 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.014890 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:03Z","lastTransitionTime":"2026-01-23T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.117000 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.117071 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.117098 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:03 crc kubenswrapper[4865]: E0123 11:54:03.117231 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:03 crc kubenswrapper[4865]: E0123 11:54:03.117324 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:03 crc kubenswrapper[4865]: E0123 11:54:03.117391 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.117537 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.117632 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.117643 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.117658 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.117669 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:03Z","lastTransitionTime":"2026-01-23T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.197304 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 09:40:35.380466815 +0000 UTC Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.219645 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.219700 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.219710 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.219730 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.219742 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:03Z","lastTransitionTime":"2026-01-23T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.322452 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.322503 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.322515 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.322534 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.322545 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:03Z","lastTransitionTime":"2026-01-23T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.425200 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.425268 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.425283 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.425303 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.425316 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:03Z","lastTransitionTime":"2026-01-23T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.529464 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.529538 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.529567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.529633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.529666 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:03Z","lastTransitionTime":"2026-01-23T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.633214 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.633273 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.633285 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.633307 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.633321 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:03Z","lastTransitionTime":"2026-01-23T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.736458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.736509 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.736530 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.736551 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.736563 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:03Z","lastTransitionTime":"2026-01-23T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.838867 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.838912 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.838922 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.838936 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.838947 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:03Z","lastTransitionTime":"2026-01-23T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.941576 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.941636 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.941671 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.941687 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:03 crc kubenswrapper[4865]: I0123 11:54:03.941700 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:03Z","lastTransitionTime":"2026-01-23T11:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.045548 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.045677 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.045710 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.045748 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.045773 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:04Z","lastTransitionTime":"2026-01-23T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.117684 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:04 crc kubenswrapper[4865]: E0123 11:54:04.117897 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.148400 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.148477 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.148501 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.148534 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.148556 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:04Z","lastTransitionTime":"2026-01-23T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.197477 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:07:30.770403702 +0000 UTC Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.252092 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.252173 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.252194 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.252219 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.252237 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:04Z","lastTransitionTime":"2026-01-23T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.354526 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.354571 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.354582 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.354624 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.354636 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:04Z","lastTransitionTime":"2026-01-23T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.456850 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.456885 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.456893 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.456905 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.456914 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:04Z","lastTransitionTime":"2026-01-23T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.559633 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.559692 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.559705 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.559726 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.559741 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:04Z","lastTransitionTime":"2026-01-23T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.662585 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.662638 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.662647 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.662666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.662676 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:04Z","lastTransitionTime":"2026-01-23T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.765810 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.765868 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.765892 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.765923 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.765944 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:04Z","lastTransitionTime":"2026-01-23T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.868336 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.868393 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.868406 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.868430 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.868444 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:04Z","lastTransitionTime":"2026-01-23T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.971381 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.971439 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.971458 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.971488 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:04 crc kubenswrapper[4865]: I0123 11:54:04.971510 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:04Z","lastTransitionTime":"2026-01-23T11:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.075142 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.075214 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.075241 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.075275 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.075303 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:05Z","lastTransitionTime":"2026-01-23T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.117751 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.117792 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:05 crc kubenswrapper[4865]: E0123 11:54:05.117927 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:05 crc kubenswrapper[4865]: E0123 11:54:05.118092 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.118219 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:05 crc kubenswrapper[4865]: E0123 11:54:05.118301 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.178109 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.178197 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.178212 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.178238 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.178252 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:05Z","lastTransitionTime":"2026-01-23T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.198566 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 12:40:44.427216975 +0000 UTC Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.281798 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.281846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.281856 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.281875 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.281886 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:05Z","lastTransitionTime":"2026-01-23T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.386123 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.386192 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.386212 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.386234 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.386247 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:05Z","lastTransitionTime":"2026-01-23T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.488752 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.488840 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.488857 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.488879 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.488895 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:05Z","lastTransitionTime":"2026-01-23T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.592567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.592674 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.592692 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.592766 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.592821 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:05Z","lastTransitionTime":"2026-01-23T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.695135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.695235 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.695252 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.695275 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.695290 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:05Z","lastTransitionTime":"2026-01-23T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.799045 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.799118 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.799135 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.799164 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.799186 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:05Z","lastTransitionTime":"2026-01-23T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.901625 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.901700 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.901715 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.901735 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:05 crc kubenswrapper[4865]: I0123 11:54:05.901749 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:05Z","lastTransitionTime":"2026-01-23T11:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.005523 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.005575 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.005588 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.005628 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.005642 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:06Z","lastTransitionTime":"2026-01-23T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.109160 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.109225 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.109249 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.109280 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.109302 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:06Z","lastTransitionTime":"2026-01-23T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.117455 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:06 crc kubenswrapper[4865]: E0123 11:54:06.117765 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.199403 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:02:33.273151626 +0000 UTC Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.200124 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-cb8rs" podStartSLOduration=71.200093998 podStartE2EDuration="1m11.200093998s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:06.173194765 +0000 UTC m=+90.342267051" watchObservedRunningTime="2026-01-23 11:54:06.200093998 +0000 UTC m=+90.369166234" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.200692 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qwf88" podStartSLOduration=71.200666232 podStartE2EDuration="1m11.200666232s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:06.200451578 +0000 UTC m=+90.369523804" watchObservedRunningTime="2026-01-23 11:54:06.200666232 +0000 UTC m=+90.369738478" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.213149 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.213189 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.213202 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.213221 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.213234 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:06Z","lastTransitionTime":"2026-01-23T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.273750 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=72.27371253 podStartE2EDuration="1m12.27371253s" podCreationTimestamp="2026-01-23 11:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:06.228308464 +0000 UTC m=+90.397380700" watchObservedRunningTime="2026-01-23 11:54:06.27371253 +0000 UTC m=+90.442784786" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.274530 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=11.274517169 podStartE2EDuration="11.274517169s" podCreationTimestamp="2026-01-23 11:53:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:06.269431187 +0000 UTC m=+90.438503433" watchObservedRunningTime="2026-01-23 11:54:06.274517169 +0000 UTC m=+90.443589425" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.317304 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.317703 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.317877 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.317954 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.318016 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:06Z","lastTransitionTime":"2026-01-23T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.368442 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.368415575 podStartE2EDuration="35.368415575s" podCreationTimestamp="2026-01-23 11:53:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:06.351740066 +0000 UTC m=+90.520812293" watchObservedRunningTime="2026-01-23 11:54:06.368415575 +0000 UTC m=+90.537487801" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.394564 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-l5tpj" podStartSLOduration=72.39452926 podStartE2EDuration="1m12.39452926s" podCreationTimestamp="2026-01-23 11:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:06.381458067 +0000 UTC m=+90.550530293" watchObservedRunningTime="2026-01-23 11:54:06.39452926 +0000 UTC m=+90.563601486" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.409149 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podStartSLOduration=71.409131859 podStartE2EDuration="1m11.409131859s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:06.396539368 +0000 UTC m=+90.565611594" watchObservedRunningTime="2026-01-23 11:54:06.409131859 +0000 UTC m=+90.578204085" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.421374 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.421421 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.421431 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.421450 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.421461 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:06Z","lastTransitionTime":"2026-01-23T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.424858 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.424843535 podStartE2EDuration="1m11.424843535s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:06.424766233 +0000 UTC m=+90.593838479" watchObservedRunningTime="2026-01-23 11:54:06.424843535 +0000 UTC m=+90.593915761" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.477809 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wrntt" podStartSLOduration=71.477780842 podStartE2EDuration="1m11.477780842s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:06.477648679 +0000 UTC m=+90.646720905" watchObservedRunningTime="2026-01-23 11:54:06.477780842 +0000 UTC m=+90.646853068" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.505232 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-54mz5" podStartSLOduration=70.505202567 podStartE2EDuration="1m10.505202567s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:06.504020079 +0000 UTC m=+90.673092305" watchObservedRunningTime="2026-01-23 11:54:06.505202567 +0000 UTC m=+90.674274793" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.523370 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.523422 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.523432 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.523449 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.523460 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:06Z","lastTransitionTime":"2026-01-23T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.626185 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.626248 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.626270 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.626302 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.626325 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:06Z","lastTransitionTime":"2026-01-23T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.729499 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.729573 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.729593 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.729652 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.729676 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:06Z","lastTransitionTime":"2026-01-23T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.833580 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.833688 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.833706 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.833733 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.833752 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:06Z","lastTransitionTime":"2026-01-23T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.936230 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.936289 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.936309 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.936333 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:06 crc kubenswrapper[4865]: I0123 11:54:06.936349 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:06Z","lastTransitionTime":"2026-01-23T11:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.039338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.039403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.039422 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.039446 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.039466 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:07Z","lastTransitionTime":"2026-01-23T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.117498 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.117537 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.117710 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:07 crc kubenswrapper[4865]: E0123 11:54:07.117814 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:07 crc kubenswrapper[4865]: E0123 11:54:07.118221 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:07 crc kubenswrapper[4865]: E0123 11:54:07.118317 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.135716 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.143239 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.143322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.143344 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.143398 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.143418 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:07Z","lastTransitionTime":"2026-01-23T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.200814 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 18:55:09.478990164 +0000 UTC Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.253224 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.253338 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.253361 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.253403 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.253438 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:07Z","lastTransitionTime":"2026-01-23T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.357893 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.357969 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.357982 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.358032 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.358046 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:07Z","lastTransitionTime":"2026-01-23T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.462663 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.462764 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.462790 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.462825 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.462853 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:07Z","lastTransitionTime":"2026-01-23T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.566661 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.566737 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.566762 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.566797 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.566822 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:07Z","lastTransitionTime":"2026-01-23T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.671082 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.671167 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.671181 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.671479 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.671513 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:07Z","lastTransitionTime":"2026-01-23T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.775139 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.775188 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.775202 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.775226 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.775239 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:07Z","lastTransitionTime":"2026-01-23T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.877812 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.877899 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.877951 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.877977 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.877995 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:07Z","lastTransitionTime":"2026-01-23T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.982392 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.982432 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.982441 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.982457 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:07 crc kubenswrapper[4865]: I0123 11:54:07.982466 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:07Z","lastTransitionTime":"2026-01-23T11:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.086840 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.086940 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.086967 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.087001 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.087022 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:08Z","lastTransitionTime":"2026-01-23T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.118081 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:08 crc kubenswrapper[4865]: E0123 11:54:08.118357 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.191049 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.191114 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.191130 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.191153 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.191171 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:08Z","lastTransitionTime":"2026-01-23T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.201654 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 20:35:46.39172887 +0000 UTC Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.294618 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.294657 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.294666 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.294680 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.294691 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:08Z","lastTransitionTime":"2026-01-23T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.397206 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.397247 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.397258 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.397274 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.397285 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:08Z","lastTransitionTime":"2026-01-23T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.500793 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.500846 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.500859 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.500881 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.500896 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:08Z","lastTransitionTime":"2026-01-23T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.604578 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.604745 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.604766 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.604803 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.604830 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:08Z","lastTransitionTime":"2026-01-23T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.708159 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.708235 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.708254 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.708282 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.708317 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:08Z","lastTransitionTime":"2026-01-23T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.810996 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.811088 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.811110 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.811140 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.811161 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:08Z","lastTransitionTime":"2026-01-23T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.915308 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.915387 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.915406 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.915438 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:08 crc kubenswrapper[4865]: I0123 11:54:08.915458 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:08Z","lastTransitionTime":"2026-01-23T11:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.019990 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.020067 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.020085 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.020112 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.020132 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:09Z","lastTransitionTime":"2026-01-23T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.117650 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.117714 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.117753 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:09 crc kubenswrapper[4865]: E0123 11:54:09.118152 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:09 crc kubenswrapper[4865]: E0123 11:54:09.118312 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.118439 4865 scope.go:117] "RemoveContainer" containerID="ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665" Jan 23 11:54:09 crc kubenswrapper[4865]: E0123 11:54:09.118503 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:09 crc kubenswrapper[4865]: E0123 11:54:09.118648 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.123582 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.123671 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.123690 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.123717 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.123736 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:09Z","lastTransitionTime":"2026-01-23T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.202026 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 10:29:21.793512566 +0000 UTC Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.226391 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.226440 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.226452 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.226473 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.226507 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:09Z","lastTransitionTime":"2026-01-23T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.330026 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.330078 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.330099 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.330130 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.330154 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:09Z","lastTransitionTime":"2026-01-23T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.434182 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.434272 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.434292 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.434322 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.434342 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:09Z","lastTransitionTime":"2026-01-23T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.538407 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.538482 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.538508 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.538545 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.538569 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:09Z","lastTransitionTime":"2026-01-23T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.643462 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.643539 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.643567 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.643635 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.643663 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:09Z","lastTransitionTime":"2026-01-23T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.714865 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.714960 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.714988 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.715029 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.715055 4865 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T11:54:09Z","lastTransitionTime":"2026-01-23T11:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.775178 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86"] Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.775804 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.781464 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.781475 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.781773 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.782159 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.808416 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0b0687e6-8eae-4755-a169-30e4d3ed8401-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.808530 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0687e6-8eae-4755-a169-30e4d3ed8401-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.808568 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b0687e6-8eae-4755-a169-30e4d3ed8401-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.808592 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b0687e6-8eae-4755-a169-30e4d3ed8401-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.808630 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0b0687e6-8eae-4755-a169-30e4d3ed8401-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.821943 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.821882198 podStartE2EDuration="2.821882198s" podCreationTimestamp="2026-01-23 11:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 11:54:09.818746383 +0000 UTC m=+93.987818609" watchObservedRunningTime="2026-01-23 11:54:09.821882198 +0000 UTC m=+93.990954434" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.910094 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0b0687e6-8eae-4755-a169-30e4d3ed8401-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.910144 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0687e6-8eae-4755-a169-30e4d3ed8401-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.910174 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b0687e6-8eae-4755-a169-30e4d3ed8401-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.910196 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b0687e6-8eae-4755-a169-30e4d3ed8401-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.910211 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0b0687e6-8eae-4755-a169-30e4d3ed8401-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.910204 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0b0687e6-8eae-4755-a169-30e4d3ed8401-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.910272 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0b0687e6-8eae-4755-a169-30e4d3ed8401-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.911100 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b0687e6-8eae-4755-a169-30e4d3ed8401-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.920294 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b0687e6-8eae-4755-a169-30e4d3ed8401-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:09 crc kubenswrapper[4865]: I0123 11:54:09.927524 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b0687e6-8eae-4755-a169-30e4d3ed8401-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x9z86\" (UID: \"0b0687e6-8eae-4755-a169-30e4d3ed8401\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:10 crc kubenswrapper[4865]: I0123 11:54:10.100304 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" Jan 23 11:54:10 crc kubenswrapper[4865]: I0123 11:54:10.117640 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:10 crc kubenswrapper[4865]: E0123 11:54:10.118141 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:10 crc kubenswrapper[4865]: I0123 11:54:10.202172 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 12:47:35.14158768 +0000 UTC Jan 23 11:54:10 crc kubenswrapper[4865]: I0123 11:54:10.203752 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 23 11:54:10 crc kubenswrapper[4865]: I0123 11:54:10.215167 4865 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 11:54:10 crc kubenswrapper[4865]: I0123 11:54:10.735214 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" event={"ID":"0b0687e6-8eae-4755-a169-30e4d3ed8401","Type":"ContainerStarted","Data":"8a15162989752302a03a1f95231e23ebb16eead506304ae975dae6b91d278744"} Jan 23 11:54:10 crc kubenswrapper[4865]: I0123 11:54:10.735296 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" event={"ID":"0b0687e6-8eae-4755-a169-30e4d3ed8401","Type":"ContainerStarted","Data":"aaf95f55321e745a9c31a00f1f486fa87051d33e36e6b83a24e28136e6ff1e48"} Jan 23 11:54:10 crc kubenswrapper[4865]: I0123 11:54:10.758858 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" podStartSLOduration=75.758829101 podStartE2EDuration="1m15.758829101s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:10.757326465 +0000 UTC 
m=+94.926398701" watchObservedRunningTime="2026-01-23 11:54:10.758829101 +0000 UTC m=+94.927901357" Jan 23 11:54:11 crc kubenswrapper[4865]: I0123 11:54:11.117379 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:11 crc kubenswrapper[4865]: I0123 11:54:11.117389 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:11 crc kubenswrapper[4865]: E0123 11:54:11.117587 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:11 crc kubenswrapper[4865]: E0123 11:54:11.117710 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:11 crc kubenswrapper[4865]: I0123 11:54:11.117389 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:11 crc kubenswrapper[4865]: E0123 11:54:11.117804 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:12 crc kubenswrapper[4865]: I0123 11:54:12.117572 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:12 crc kubenswrapper[4865]: E0123 11:54:12.118753 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:13 crc kubenswrapper[4865]: I0123 11:54:13.117734 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:13 crc kubenswrapper[4865]: I0123 11:54:13.117780 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:13 crc kubenswrapper[4865]: I0123 11:54:13.117785 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:13 crc kubenswrapper[4865]: E0123 11:54:13.119025 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:13 crc kubenswrapper[4865]: E0123 11:54:13.119082 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:13 crc kubenswrapper[4865]: E0123 11:54:13.118796 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:13 crc kubenswrapper[4865]: I0123 11:54:13.650378 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:13 crc kubenswrapper[4865]: E0123 11:54:13.650650 4865 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:54:13 crc kubenswrapper[4865]: E0123 11:54:13.650831 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs podName:a15fb93f-eb63-4a8c-bec6-20bed7300dca nodeName:}" failed. No retries permitted until 2026-01-23 11:55:17.650797504 +0000 UTC m=+161.819869770 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs") pod "network-metrics-daemon-n76rp" (UID: "a15fb93f-eb63-4a8c-bec6-20bed7300dca") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 11:54:14 crc kubenswrapper[4865]: I0123 11:54:14.117924 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:14 crc kubenswrapper[4865]: E0123 11:54:14.118149 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:15 crc kubenswrapper[4865]: I0123 11:54:15.117192 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:15 crc kubenswrapper[4865]: I0123 11:54:15.117266 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:15 crc kubenswrapper[4865]: E0123 11:54:15.117319 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:15 crc kubenswrapper[4865]: I0123 11:54:15.117369 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:15 crc kubenswrapper[4865]: E0123 11:54:15.117440 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:15 crc kubenswrapper[4865]: E0123 11:54:15.117508 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:16 crc kubenswrapper[4865]: I0123 11:54:16.117988 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:16 crc kubenswrapper[4865]: E0123 11:54:16.119485 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:17 crc kubenswrapper[4865]: I0123 11:54:17.118136 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:17 crc kubenswrapper[4865]: I0123 11:54:17.118271 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:17 crc kubenswrapper[4865]: I0123 11:54:17.118134 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:17 crc kubenswrapper[4865]: E0123 11:54:17.118321 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:17 crc kubenswrapper[4865]: E0123 11:54:17.118475 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:17 crc kubenswrapper[4865]: E0123 11:54:17.118730 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:18 crc kubenswrapper[4865]: I0123 11:54:18.117735 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:18 crc kubenswrapper[4865]: E0123 11:54:18.117862 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:19 crc kubenswrapper[4865]: I0123 11:54:19.117227 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:19 crc kubenswrapper[4865]: I0123 11:54:19.117281 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:19 crc kubenswrapper[4865]: I0123 11:54:19.117365 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:19 crc kubenswrapper[4865]: E0123 11:54:19.117463 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:19 crc kubenswrapper[4865]: E0123 11:54:19.117561 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:19 crc kubenswrapper[4865]: E0123 11:54:19.117671 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:20 crc kubenswrapper[4865]: I0123 11:54:20.117915 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:20 crc kubenswrapper[4865]: E0123 11:54:20.118070 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:21 crc kubenswrapper[4865]: I0123 11:54:21.117135 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:21 crc kubenswrapper[4865]: I0123 11:54:21.117135 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:21 crc kubenswrapper[4865]: E0123 11:54:21.117261 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:21 crc kubenswrapper[4865]: E0123 11:54:21.117308 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:21 crc kubenswrapper[4865]: I0123 11:54:21.117147 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:21 crc kubenswrapper[4865]: E0123 11:54:21.117401 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:22 crc kubenswrapper[4865]: I0123 11:54:22.117864 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:22 crc kubenswrapper[4865]: E0123 11:54:22.118000 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:23 crc kubenswrapper[4865]: I0123 11:54:23.117458 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:23 crc kubenswrapper[4865]: I0123 11:54:23.117465 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:23 crc kubenswrapper[4865]: I0123 11:54:23.118000 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:23 crc kubenswrapper[4865]: E0123 11:54:23.117994 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:23 crc kubenswrapper[4865]: E0123 11:54:23.118154 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:23 crc kubenswrapper[4865]: E0123 11:54:23.118274 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:23 crc kubenswrapper[4865]: I0123 11:54:23.119314 4865 scope.go:117] "RemoveContainer" containerID="ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665" Jan 23 11:54:23 crc kubenswrapper[4865]: E0123 11:54:23.119680 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-68shs_openshift-ovn-kubernetes(4ea3549b-3898-4d82-8240-2e062b4a6046)\"" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" Jan 23 11:54:24 crc kubenswrapper[4865]: I0123 11:54:24.117417 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:24 crc kubenswrapper[4865]: E0123 11:54:24.117684 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:25 crc kubenswrapper[4865]: I0123 11:54:25.117479 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:25 crc kubenswrapper[4865]: I0123 11:54:25.117479 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:25 crc kubenswrapper[4865]: I0123 11:54:25.117591 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:25 crc kubenswrapper[4865]: E0123 11:54:25.117646 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:25 crc kubenswrapper[4865]: E0123 11:54:25.117862 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:25 crc kubenswrapper[4865]: E0123 11:54:25.117967 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:26 crc kubenswrapper[4865]: I0123 11:54:26.117802 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:26 crc kubenswrapper[4865]: E0123 11:54:26.119050 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:27 crc kubenswrapper[4865]: I0123 11:54:27.117657 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:27 crc kubenswrapper[4865]: I0123 11:54:27.117917 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:27 crc kubenswrapper[4865]: I0123 11:54:27.118038 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:27 crc kubenswrapper[4865]: E0123 11:54:27.118236 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:27 crc kubenswrapper[4865]: E0123 11:54:27.118697 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:27 crc kubenswrapper[4865]: E0123 11:54:27.118899 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:28 crc kubenswrapper[4865]: I0123 11:54:28.117851 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:28 crc kubenswrapper[4865]: E0123 11:54:28.118136 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:29 crc kubenswrapper[4865]: I0123 11:54:29.117949 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:29 crc kubenswrapper[4865]: E0123 11:54:29.118121 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:29 crc kubenswrapper[4865]: I0123 11:54:29.118271 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:29 crc kubenswrapper[4865]: E0123 11:54:29.118437 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:29 crc kubenswrapper[4865]: I0123 11:54:29.118496 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:29 crc kubenswrapper[4865]: E0123 11:54:29.118595 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:29 crc kubenswrapper[4865]: I0123 11:54:29.805380 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cb8rs_b3d06336-44ac-4c17-899b-28cbfe2ee64d/kube-multus/1.log" Jan 23 11:54:29 crc kubenswrapper[4865]: I0123 11:54:29.806719 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cb8rs_b3d06336-44ac-4c17-899b-28cbfe2ee64d/kube-multus/0.log" Jan 23 11:54:29 crc kubenswrapper[4865]: I0123 11:54:29.806765 4865 generic.go:334] "Generic (PLEG): container finished" podID="b3d06336-44ac-4c17-899b-28cbfe2ee64d" containerID="9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06" exitCode=1 Jan 23 11:54:29 crc kubenswrapper[4865]: I0123 11:54:29.806797 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cb8rs" event={"ID":"b3d06336-44ac-4c17-899b-28cbfe2ee64d","Type":"ContainerDied","Data":"9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06"} Jan 23 11:54:29 crc kubenswrapper[4865]: I0123 11:54:29.806835 4865 scope.go:117] "RemoveContainer" containerID="6191f6bded990f42273ff7bb5e6f92be70a69d6c19e432f0d156d3d0c498b904" Jan 23 11:54:29 crc kubenswrapper[4865]: I0123 11:54:29.807476 4865 scope.go:117] "RemoveContainer" containerID="9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06" Jan 23 11:54:29 crc kubenswrapper[4865]: E0123 11:54:29.807799 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-cb8rs_openshift-multus(b3d06336-44ac-4c17-899b-28cbfe2ee64d)\"" pod="openshift-multus/multus-cb8rs" podUID="b3d06336-44ac-4c17-899b-28cbfe2ee64d" Jan 23 11:54:30 crc kubenswrapper[4865]: I0123 11:54:30.117727 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:30 crc kubenswrapper[4865]: E0123 11:54:30.118035 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:30 crc kubenswrapper[4865]: I0123 11:54:30.813567 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cb8rs_b3d06336-44ac-4c17-899b-28cbfe2ee64d/kube-multus/1.log" Jan 23 11:54:31 crc kubenswrapper[4865]: I0123 11:54:31.118004 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:31 crc kubenswrapper[4865]: E0123 11:54:31.118167 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:31 crc kubenswrapper[4865]: I0123 11:54:31.118448 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:31 crc kubenswrapper[4865]: E0123 11:54:31.118535 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:31 crc kubenswrapper[4865]: I0123 11:54:31.119044 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:31 crc kubenswrapper[4865]: E0123 11:54:31.119389 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:32 crc kubenswrapper[4865]: I0123 11:54:32.120011 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:32 crc kubenswrapper[4865]: E0123 11:54:32.120533 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:33 crc kubenswrapper[4865]: I0123 11:54:33.117763 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:33 crc kubenswrapper[4865]: I0123 11:54:33.117779 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:33 crc kubenswrapper[4865]: E0123 11:54:33.117866 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:33 crc kubenswrapper[4865]: E0123 11:54:33.117934 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:33 crc kubenswrapper[4865]: I0123 11:54:33.118205 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:33 crc kubenswrapper[4865]: E0123 11:54:33.118440 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:34 crc kubenswrapper[4865]: I0123 11:54:34.118069 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:34 crc kubenswrapper[4865]: E0123 11:54:34.118228 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:35 crc kubenswrapper[4865]: I0123 11:54:35.117351 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:35 crc kubenswrapper[4865]: E0123 11:54:35.117486 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:35 crc kubenswrapper[4865]: I0123 11:54:35.117496 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:35 crc kubenswrapper[4865]: E0123 11:54:35.117691 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:35 crc kubenswrapper[4865]: I0123 11:54:35.118006 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:35 crc kubenswrapper[4865]: E0123 11:54:35.118302 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:36 crc kubenswrapper[4865]: I0123 11:54:36.117639 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:36 crc kubenswrapper[4865]: E0123 11:54:36.120220 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:36 crc kubenswrapper[4865]: E0123 11:54:36.150280 4865 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 23 11:54:36 crc kubenswrapper[4865]: E0123 11:54:36.221673 4865 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 11:54:37 crc kubenswrapper[4865]: I0123 11:54:37.117069 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:37 crc kubenswrapper[4865]: I0123 11:54:37.117102 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:37 crc kubenswrapper[4865]: E0123 11:54:37.117667 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:37 crc kubenswrapper[4865]: E0123 11:54:37.117738 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:37 crc kubenswrapper[4865]: I0123 11:54:37.117905 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:37 crc kubenswrapper[4865]: I0123 11:54:37.117933 4865 scope.go:117] "RemoveContainer" containerID="ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665" Jan 23 11:54:37 crc kubenswrapper[4865]: E0123 11:54:37.118292 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:37 crc kubenswrapper[4865]: I0123 11:54:37.837172 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/3.log" Jan 23 11:54:37 crc kubenswrapper[4865]: I0123 11:54:37.840464 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerStarted","Data":"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd"} Jan 23 11:54:37 crc kubenswrapper[4865]: I0123 11:54:37.840951 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:54:37 crc kubenswrapper[4865]: I0123 11:54:37.924355 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podStartSLOduration=102.924337264 podStartE2EDuration="1m42.924337264s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:37.871191283 +0000 UTC m=+122.040263519" watchObservedRunningTime="2026-01-23 11:54:37.924337264 +0000 UTC m=+122.093409490" Jan 23 11:54:37 crc kubenswrapper[4865]: I0123 11:54:37.925173 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n76rp"] Jan 23 11:54:37 crc kubenswrapper[4865]: I0123 11:54:37.925271 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:37 crc kubenswrapper[4865]: E0123 11:54:37.925348 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:39 crc kubenswrapper[4865]: I0123 11:54:39.117522 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:39 crc kubenswrapper[4865]: I0123 11:54:39.117596 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:39 crc kubenswrapper[4865]: E0123 11:54:39.117729 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:39 crc kubenswrapper[4865]: E0123 11:54:39.117872 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:39 crc kubenswrapper[4865]: I0123 11:54:39.117926 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:39 crc kubenswrapper[4865]: E0123 11:54:39.118074 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:40 crc kubenswrapper[4865]: I0123 11:54:40.117454 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:40 crc kubenswrapper[4865]: E0123 11:54:40.117673 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:40 crc kubenswrapper[4865]: I0123 11:54:40.118230 4865 scope.go:117] "RemoveContainer" containerID="9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06" Jan 23 11:54:40 crc kubenswrapper[4865]: I0123 11:54:40.861833 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cb8rs_b3d06336-44ac-4c17-899b-28cbfe2ee64d/kube-multus/1.log" Jan 23 11:54:40 crc kubenswrapper[4865]: I0123 11:54:40.862221 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cb8rs" event={"ID":"b3d06336-44ac-4c17-899b-28cbfe2ee64d","Type":"ContainerStarted","Data":"00a6f6797638587efdf93cfa4a2c2f18b2ee85067681c5118db013881e63b8a4"} Jan 23 11:54:41 crc kubenswrapper[4865]: I0123 11:54:41.117386 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:41 crc kubenswrapper[4865]: I0123 11:54:41.117425 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:41 crc kubenswrapper[4865]: I0123 11:54:41.117455 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:41 crc kubenswrapper[4865]: E0123 11:54:41.117564 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:41 crc kubenswrapper[4865]: E0123 11:54:41.117798 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:41 crc kubenswrapper[4865]: E0123 11:54:41.117908 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:41 crc kubenswrapper[4865]: E0123 11:54:41.223445 4865 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 11:54:42 crc kubenswrapper[4865]: I0123 11:54:42.118123 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:42 crc kubenswrapper[4865]: E0123 11:54:42.118305 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:43 crc kubenswrapper[4865]: I0123 11:54:43.117397 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:43 crc kubenswrapper[4865]: I0123 11:54:43.117444 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:43 crc kubenswrapper[4865]: I0123 11:54:43.117417 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:43 crc kubenswrapper[4865]: E0123 11:54:43.117572 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:43 crc kubenswrapper[4865]: E0123 11:54:43.117688 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:43 crc kubenswrapper[4865]: E0123 11:54:43.117794 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:44 crc kubenswrapper[4865]: I0123 11:54:44.117232 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:44 crc kubenswrapper[4865]: E0123 11:54:44.117447 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:45 crc kubenswrapper[4865]: I0123 11:54:45.117725 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:45 crc kubenswrapper[4865]: I0123 11:54:45.117724 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:45 crc kubenswrapper[4865]: I0123 11:54:45.117733 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:45 crc kubenswrapper[4865]: E0123 11:54:45.117981 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 11:54:45 crc kubenswrapper[4865]: E0123 11:54:45.117844 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 11:54:45 crc kubenswrapper[4865]: E0123 11:54:45.118053 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 11:54:46 crc kubenswrapper[4865]: I0123 11:54:46.117087 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:46 crc kubenswrapper[4865]: E0123 11:54:46.118971 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-n76rp" podUID="a15fb93f-eb63-4a8c-bec6-20bed7300dca" Jan 23 11:54:47 crc kubenswrapper[4865]: I0123 11:54:47.117639 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:54:47 crc kubenswrapper[4865]: I0123 11:54:47.117670 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:54:47 crc kubenswrapper[4865]: I0123 11:54:47.117707 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:54:47 crc kubenswrapper[4865]: I0123 11:54:47.119961 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 11:54:47 crc kubenswrapper[4865]: I0123 11:54:47.120010 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 11:54:47 crc kubenswrapper[4865]: I0123 11:54:47.120722 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 11:54:47 crc kubenswrapper[4865]: I0123 11:54:47.121699 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 11:54:48 crc kubenswrapper[4865]: I0123 11:54:48.118216 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:54:48 crc kubenswrapper[4865]: I0123 11:54:48.122280 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 11:54:48 crc kubenswrapper[4865]: I0123 11:54:48.124211 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.589982 4865 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.685415 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hxj4k"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.686197 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.687731 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hrzcb"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.688765 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-88ktq"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.688827 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.689196 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.691855 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-r8fk2"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.693029 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.696034 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.696495 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.700975 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.701045 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.700977 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.702876 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.705995 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.706279 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.706391 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.706394 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.706481 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.712949 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.713054 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.713593 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.713591 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.718866 
4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.719441 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.720369 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.721629 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.723776 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.724037 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.724210 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.724404 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.724444 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.726621 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.738321 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.738412 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.738540 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.740765 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.741241 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.746191 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.746692 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.746968 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.747211 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.747237 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.747426 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.747490 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.748318 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.748620 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.749062 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8lsbn"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.749629 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.750047 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.750126 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.754622 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.755631 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-znx59"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.756171 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.756427 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.774847 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.775112 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.776126 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-bpdjt"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.776586 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.776763 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.777169 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gzd5x"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.777542 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gzd5x" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.777581 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.779774 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-48b72"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.782578 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gk4fh"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.783706 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-48b72" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.783954 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.784079 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785242 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785281 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-976w6\" (UniqueName: \"kubernetes.io/projected/d5137707-0cd3-4f39-9d73-3401d315e827-kube-api-access-976w6\") pod \"route-controller-manager-6576b87f9c-7ddsc\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785309 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brv2r\" (UniqueName: \"kubernetes.io/projected/0cab2dc0-42b2-4029-8388-b20c287698bc-kube-api-access-brv2r\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785334 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cab2dc0-42b2-4029-8388-b20c287698bc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785355 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-etcd-serving-ca\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785379 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7t6k\" (UniqueName: \"kubernetes.io/projected/2a66fefc-bc9d-4922-821a-63e84b87e740-kube-api-access-x7t6k\") pod \"machine-approver-56656f9798-fmgw6\" (UID: \"2a66fefc-bc9d-4922-821a-63e84b87e740\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785400 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5137707-0cd3-4f39-9d73-3401d315e827-serving-cert\") pod \"route-controller-manager-6576b87f9c-7ddsc\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785419 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a66fefc-bc9d-4922-821a-63e84b87e740-auth-proxy-config\") pod \"machine-approver-56656f9798-fmgw6\" (UID: \"2a66fefc-bc9d-4922-821a-63e84b87e740\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785452 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-client-ca\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785474 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5137707-0cd3-4f39-9d73-3401d315e827-client-ca\") pod \"route-controller-manager-6576b87f9c-7ddsc\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785502 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0cab2dc0-42b2-4029-8388-b20c287698bc-encryption-config\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785522 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-etcd-client\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785543 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjzms\" (UniqueName: \"kubernetes.io/projected/131a949e-2d37-47b4-8d7e-1f1e1afb9283-kube-api-access-rjzms\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785567 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0cab2dc0-42b2-4029-8388-b20c287698bc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785586 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cab2dc0-42b2-4029-8388-b20c287698bc-serving-cert\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785633 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cab2dc0-42b2-4029-8388-b20c287698bc-audit-dir\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785664 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-audit\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785686 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-audit-dir\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785706 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7r6t\" (UniqueName: \"kubernetes.io/projected/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-kube-api-access-w7r6t\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785726 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/131a949e-2d37-47b4-8d7e-1f1e1afb9283-serving-cert\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785746 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8896518-4b5b-4712-9994-0bb445a3504f-serving-cert\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785766 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2a66fefc-bc9d-4922-821a-63e84b87e740-machine-approver-tls\") pod \"machine-approver-56656f9798-fmgw6\" (UID: \"2a66fefc-bc9d-4922-821a-63e84b87e740\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785785 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-encryption-config\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785806 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cab2dc0-42b2-4029-8388-b20c287698bc-audit-policies\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785829 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/141f6171-3d39-421b-98f4-6accc5d30ae2-serving-cert\") pod \"openshift-config-operator-7777fb866f-znx59\" (UID: \"141f6171-3d39-421b-98f4-6accc5d30ae2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785852 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8896518-4b5b-4712-9994-0bb445a3504f-config\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785879 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785905 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe7c397-99ae-494d-a418-b0f08568f156-config\") pod \"console-operator-58897d9998-8lsbn\" (UID: \"cfe7c397-99ae-494d-a418-b0f08568f156\") " pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785932 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8896518-4b5b-4712-9994-0bb445a3504f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785964 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-config\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.785997 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8896518-4b5b-4712-9994-0bb445a3504f-service-ca-bundle\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786028 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dab59711-c8b1-43f3-a608-0f892c43ac60-images\") pod \"machine-api-operator-5694c8668f-hxj4k\" (UID: \"dab59711-c8b1-43f3-a608-0f892c43ac60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786058 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dab59711-c8b1-43f3-a608-0f892c43ac60-config\") pod \"machine-api-operator-5694c8668f-hxj4k\" (UID: \"dab59711-c8b1-43f3-a608-0f892c43ac60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786088 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/141f6171-3d39-421b-98f4-6accc5d30ae2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-znx59\" (UID: \"141f6171-3d39-421b-98f4-6accc5d30ae2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786120 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmlp\" (UniqueName: \"kubernetes.io/projected/141f6171-3d39-421b-98f4-6accc5d30ae2-kube-api-access-4fmlp\") pod \"openshift-config-operator-7777fb866f-znx59\" (UID: \"141f6171-3d39-421b-98f4-6accc5d30ae2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786148 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a66fefc-bc9d-4922-821a-63e84b87e740-config\") pod \"machine-approver-56656f9798-fmgw6\" (UID: \"2a66fefc-bc9d-4922-821a-63e84b87e740\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786180 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4mgd\" (UniqueName: \"kubernetes.io/projected/cfe7c397-99ae-494d-a418-b0f08568f156-kube-api-access-z4mgd\") pod \"console-operator-58897d9998-8lsbn\" (UID: \"cfe7c397-99ae-494d-a418-b0f08568f156\") " pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786209 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5137707-0cd3-4f39-9d73-3401d315e827-config\") pod \"route-controller-manager-6576b87f9c-7ddsc\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786254 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0cab2dc0-42b2-4029-8388-b20c287698bc-etcd-client\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786284 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfe7c397-99ae-494d-a418-b0f08568f156-trusted-ca\") pod \"console-operator-58897d9998-8lsbn\" (UID: \"cfe7c397-99ae-494d-a418-b0f08568f156\") " pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786318 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/dab59711-c8b1-43f3-a608-0f892c43ac60-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hxj4k\" (UID: \"dab59711-c8b1-43f3-a608-0f892c43ac60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786359 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-config\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786388 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-serving-cert\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786416 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v22q4\" (UniqueName: \"kubernetes.io/projected/dab59711-c8b1-43f3-a608-0f892c43ac60-kube-api-access-v22q4\") pod \"machine-api-operator-5694c8668f-hxj4k\" (UID: \"dab59711-c8b1-43f3-a608-0f892c43ac60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786446 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfe7c397-99ae-494d-a418-b0f08568f156-serving-cert\") pod \"console-operator-58897d9998-8lsbn\" (UID: \"cfe7c397-99ae-494d-a418-b0f08568f156\") " pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786472 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-image-import-ca\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786510 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcjgd\" (UniqueName: \"kubernetes.io/projected/c8896518-4b5b-4712-9994-0bb445a3504f-kube-api-access-vcjgd\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.786532 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-node-pullsecrets\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.787386 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6ph28"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.787813 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.788075 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.793404 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.793903 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.794082 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.794283 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.800781 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gc69w"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.801315 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.813179 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.813420 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.813578 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.813719 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.813893 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.814009 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.814341 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.814505 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.814648 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.814724 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.814796 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.814875 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.814946 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.815013 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.815098 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.815168 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.815231 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.815300 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.815693 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.815828 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.815945 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.816048 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.816121 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.816157 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.816199 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.813935 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.816270 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.816282 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.816349 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.816378 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 
11:54:50.816351 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.816474 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.818704 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.819218 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.824689 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.824865 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.824984 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.825129 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.825657 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.826637 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.828100 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.828245 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.828387 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.828483 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.828560 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.832085 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.832346 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.849964 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.850259 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.850639 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.852028 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.852049 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.852335 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.852536 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.858143 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.858264 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.860853 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.862842 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.852340 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.868979 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.883523 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-swk7h"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.884028 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.884389 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.885237 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.885829 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.886074 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.887084 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dab59711-c8b1-43f3-a608-0f892c43ac60-images\") pod \"machine-api-operator-5694c8668f-hxj4k\" (UID: \"dab59711-c8b1-43f3-a608-0f892c43ac60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.889657 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8896518-4b5b-4712-9994-0bb445a3504f-service-ca-bundle\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.889767 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/141f6171-3d39-421b-98f4-6accc5d30ae2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-znx59\" (UID: \"141f6171-3d39-421b-98f4-6accc5d30ae2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.889847 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fmlp\" (UniqueName: \"kubernetes.io/projected/141f6171-3d39-421b-98f4-6accc5d30ae2-kube-api-access-4fmlp\") pod \"openshift-config-operator-7777fb866f-znx59\" (UID: \"141f6171-3d39-421b-98f4-6accc5d30ae2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.889928 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a66fefc-bc9d-4922-821a-63e84b87e740-config\") pod \"machine-approver-56656f9798-fmgw6\" (UID: \"2a66fefc-bc9d-4922-821a-63e84b87e740\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.890009 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dab59711-c8b1-43f3-a608-0f892c43ac60-config\") pod \"machine-api-operator-5694c8668f-hxj4k\" (UID: \"dab59711-c8b1-43f3-a608-0f892c43ac60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.890093 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4mgd\" (UniqueName: \"kubernetes.io/projected/cfe7c397-99ae-494d-a418-b0f08568f156-kube-api-access-z4mgd\") pod \"console-operator-58897d9998-8lsbn\" (UID: \"cfe7c397-99ae-494d-a418-b0f08568f156\") " pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.890392 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5137707-0cd3-4f39-9d73-3401d315e827-config\") pod \"route-controller-manager-6576b87f9c-7ddsc\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.890487 
4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0cab2dc0-42b2-4029-8388-b20c287698bc-etcd-client\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.890578 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dab59711-c8b1-43f3-a608-0f892c43ac60-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hxj4k\" (UID: \"dab59711-c8b1-43f3-a608-0f892c43ac60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.890692 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfe7c397-99ae-494d-a418-b0f08568f156-trusted-ca\") pod \"console-operator-58897d9998-8lsbn\" (UID: \"cfe7c397-99ae-494d-a418-b0f08568f156\") " pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.890752 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dab59711-c8b1-43f3-a608-0f892c43ac60-images\") pod \"machine-api-operator-5694c8668f-hxj4k\" (UID: \"dab59711-c8b1-43f3-a608-0f892c43ac60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.890842 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-config\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.890926 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-serving-cert\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.891019 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v22q4\" (UniqueName: \"kubernetes.io/projected/dab59711-c8b1-43f3-a608-0f892c43ac60-kube-api-access-v22q4\") pod \"machine-api-operator-5694c8668f-hxj4k\" (UID: \"dab59711-c8b1-43f3-a608-0f892c43ac60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.891094 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfe7c397-99ae-494d-a418-b0f08568f156-serving-cert\") pod \"console-operator-58897d9998-8lsbn\" (UID: \"cfe7c397-99ae-494d-a418-b0f08568f156\") " pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.891175 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-image-import-ca\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " 
pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.891270 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-node-pullsecrets\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.891347 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcjgd\" (UniqueName: \"kubernetes.io/projected/c8896518-4b5b-4712-9994-0bb445a3504f-kube-api-access-vcjgd\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.891433 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.891539 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-976w6\" (UniqueName: \"kubernetes.io/projected/d5137707-0cd3-4f39-9d73-3401d315e827-kube-api-access-976w6\") pod \"route-controller-manager-6576b87f9c-7ddsc\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.891655 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brv2r\" (UniqueName: \"kubernetes.io/projected/0cab2dc0-42b2-4029-8388-b20c287698bc-kube-api-access-brv2r\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.891750 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cab2dc0-42b2-4029-8388-b20c287698bc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.891834 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-etcd-serving-ca\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.891916 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7t6k\" (UniqueName: \"kubernetes.io/projected/2a66fefc-bc9d-4922-821a-63e84b87e740-kube-api-access-x7t6k\") pod \"machine-approver-56656f9798-fmgw6\" (UID: \"2a66fefc-bc9d-4922-821a-63e84b87e740\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892008 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5137707-0cd3-4f39-9d73-3401d315e827-serving-cert\") pod \"route-controller-manager-6576b87f9c-7ddsc\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892092 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a66fefc-bc9d-4922-821a-63e84b87e740-auth-proxy-config\") pod \"machine-approver-56656f9798-fmgw6\" (UID: \"2a66fefc-bc9d-4922-821a-63e84b87e740\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892177 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-client-ca\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892268 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5137707-0cd3-4f39-9d73-3401d315e827-client-ca\") pod \"route-controller-manager-6576b87f9c-7ddsc\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892357 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0cab2dc0-42b2-4029-8388-b20c287698bc-encryption-config\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892443 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-etcd-client\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892532 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0cab2dc0-42b2-4029-8388-b20c287698bc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892629 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjzms\" (UniqueName: \"kubernetes.io/projected/131a949e-2d37-47b4-8d7e-1f1e1afb9283-kube-api-access-rjzms\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892730 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cab2dc0-42b2-4029-8388-b20c287698bc-serving-cert\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892906 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cab2dc0-42b2-4029-8388-b20c287698bc-audit-dir\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892994 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-audit\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.893075 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-audit-dir\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.893161 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8896518-4b5b-4712-9994-0bb445a3504f-serving-cert\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.893243 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2a66fefc-bc9d-4922-821a-63e84b87e740-machine-approver-tls\") pod \"machine-approver-56656f9798-fmgw6\" (UID: \"2a66fefc-bc9d-4922-821a-63e84b87e740\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.893332 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-encryption-config\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.893411 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7r6t\" (UniqueName: \"kubernetes.io/projected/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-kube-api-access-w7r6t\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.893488 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/131a949e-2d37-47b4-8d7e-1f1e1afb9283-serving-cert\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.893627 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cab2dc0-42b2-4029-8388-b20c287698bc-audit-policies\") 
pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.893732 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/141f6171-3d39-421b-98f4-6accc5d30ae2-serving-cert\") pod \"openshift-config-operator-7777fb866f-znx59\" (UID: \"141f6171-3d39-421b-98f4-6accc5d30ae2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.893831 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8896518-4b5b-4712-9994-0bb445a3504f-config\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.893948 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8896518-4b5b-4712-9994-0bb445a3504f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.894080 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.894177 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe7c397-99ae-494d-a418-b0f08568f156-config\") pod \"console-operator-58897d9998-8lsbn\" (UID: \"cfe7c397-99ae-494d-a418-b0f08568f156\") " pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.894251 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-config\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.895963 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-config\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.897212 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-trusted-ca-bundle\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.898009 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cab2dc0-42b2-4029-8388-b20c287698bc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.898576 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-etcd-serving-ca\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.902926 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dab59711-c8b1-43f3-a608-0f892c43ac60-config\") pod \"machine-api-operator-5694c8668f-hxj4k\" (UID: \"dab59711-c8b1-43f3-a608-0f892c43ac60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.903359 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a66fefc-bc9d-4922-821a-63e84b87e740-config\") pod \"machine-approver-56656f9798-fmgw6\" (UID: \"2a66fefc-bc9d-4922-821a-63e84b87e740\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.903415 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-audit-dir\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.903838 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a66fefc-bc9d-4922-821a-63e84b87e740-auth-proxy-config\") pod \"machine-approver-56656f9798-fmgw6\" (UID: \"2a66fefc-bc9d-4922-821a-63e84b87e740\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892089 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.904909 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.905725 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-client-ca\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892176 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-node-pullsecrets\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.906368 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5137707-0cd3-4f39-9d73-3401d315e827-client-ca\") pod \"route-controller-manager-6576b87f9c-7ddsc\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.906457 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-serving-cert\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.906710 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-image-import-ca\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.907469 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cfe7c397-99ae-494d-a418-b0f08568f156-trusted-ca\") pod \"console-operator-58897d9998-8lsbn\" (UID: \"cfe7c397-99ae-494d-a418-b0f08568f156\") " pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.907752 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5137707-0cd3-4f39-9d73-3401d315e827-config\") pod \"route-controller-manager-6576b87f9c-7ddsc\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.907821 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfe7c397-99ae-494d-a418-b0f08568f156-serving-cert\") pod \"console-operator-58897d9998-8lsbn\" (UID: \"cfe7c397-99ae-494d-a418-b0f08568f156\") " pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.907888 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-config\") pod 
\"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.908165 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0cab2dc0-42b2-4029-8388-b20c287698bc-etcd-client\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.908214 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cab2dc0-42b2-4029-8388-b20c287698bc-audit-dir\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.893656 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8896518-4b5b-4712-9994-0bb445a3504f-service-ca-bundle\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.910025 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.910574 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.910899 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.911143 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.913787 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0cab2dc0-42b2-4029-8388-b20c287698bc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.915082 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.894078 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/141f6171-3d39-421b-98f4-6accc5d30ae2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-znx59\" (UID: \"141f6171-3d39-421b-98f4-6accc5d30ae2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.915619 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfe7c397-99ae-494d-a418-b0f08568f156-config\") pod \"console-operator-58897d9998-8lsbn\" (UID: \"cfe7c397-99ae-494d-a418-b0f08568f156\") " pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.889085 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.889202 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.889680 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.890813 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.890875 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.890908 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.891915 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.925117 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8896518-4b5b-4712-9994-0bb445a3504f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc 
kubenswrapper[4865]: I0123 11:54:50.892059 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.892915 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.925502 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/dab59711-c8b1-43f3-a608-0f892c43ac60-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hxj4k\" (UID: \"dab59711-c8b1-43f3-a608-0f892c43ac60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.914062 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.925865 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0cab2dc0-42b2-4029-8388-b20c287698bc-encryption-config\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.925978 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-audit\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.926049 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cab2dc0-42b2-4029-8388-b20c287698bc-audit-policies\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.927959 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2a66fefc-bc9d-4922-821a-63e84b87e740-machine-approver-tls\") pod \"machine-approver-56656f9798-fmgw6\" (UID: \"2a66fefc-bc9d-4922-821a-63e84b87e740\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.929077 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8896518-4b5b-4712-9994-0bb445a3504f-config\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.936335 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/141f6171-3d39-421b-98f4-6accc5d30ae2-serving-cert\") pod \"openshift-config-operator-7777fb866f-znx59\" (UID: \"141f6171-3d39-421b-98f4-6accc5d30ae2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.936687 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8896518-4b5b-4712-9994-0bb445a3504f-serving-cert\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.936700 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5137707-0cd3-4f39-9d73-3401d315e827-serving-cert\") pod \"route-controller-manager-6576b87f9c-7ddsc\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.937631 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-encryption-config\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.938151 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cab2dc0-42b2-4029-8388-b20c287698bc-serving-cert\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.939561 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.943247 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/131a949e-2d37-47b4-8d7e-1f1e1afb9283-serving-cert\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.944976 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-etcd-client\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.946981 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.948476 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.950158 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.954417 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-78x6m"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.954774 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.954934 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-78x6m" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.959154 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.960407 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tb57w"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.960980 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tb57w" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.963201 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.963829 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.964836 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-88ktq"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.965943 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mwzzv"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.966390 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.967585 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.968387 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.969361 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.970844 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hxj4k"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.980777 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.983930 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.984761 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.985277 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.985638 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.985796 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.985894 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8mx4s"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.986290 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.986912 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7hq88"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.987464 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.988029 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.988112 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.988640 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.989378 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hrzcb"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.990406 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-r8fk2"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.991352 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.993272 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gk4fh"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.994353 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8lsbn"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.995618 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tb57w"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.996672 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.997762 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6ph28"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.998778 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bpdjt"] Jan 23 11:54:50 crc kubenswrapper[4865]: I0123 11:54:50.999755 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.001054 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.002738 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.002757 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g7l9x"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.003639 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.003748 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-znx59"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.005025 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.005989 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.006962 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cshcx"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.007486 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cshcx" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.008272 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.008462 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.009055 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-48b72"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.009857 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bx4k5"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.011423 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.011585 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bx4k5" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.011889 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.012890 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gzd5x"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.014461 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.015228 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.016542 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.017769 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-78x6m"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.018902 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mwzzv"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.019942 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.028469 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g7l9x"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.029408 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.032841 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7hq88"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.034876 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gc69w"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.035645 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8mx4s"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.037364 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.039012 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.039033 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.040332 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.041430 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.042638 4865 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bx4k5"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.044655 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-z28b7"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.045802 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z28b7"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.045878 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-z28b7" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.048342 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.069295 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.088045 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.110240 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.128570 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.149582 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.168670 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.188393 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.229035 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.249830 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.269237 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.288703 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.308705 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.328680 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.348581 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.369637 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.388111 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.446650 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fmlp\" (UniqueName: \"kubernetes.io/projected/141f6171-3d39-421b-98f4-6accc5d30ae2-kube-api-access-4fmlp\") pod \"openshift-config-operator-7777fb866f-znx59\" (UID: \"141f6171-3d39-421b-98f4-6accc5d30ae2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.469274 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcjgd\" (UniqueName: \"kubernetes.io/projected/c8896518-4b5b-4712-9994-0bb445a3504f-kube-api-access-vcjgd\") pod \"authentication-operator-69f744f599-hrzcb\" (UID: \"c8896518-4b5b-4712-9994-0bb445a3504f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.484489 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-976w6\" (UniqueName: \"kubernetes.io/projected/d5137707-0cd3-4f39-9d73-3401d315e827-kube-api-access-976w6\") pod \"route-controller-manager-6576b87f9c-7ddsc\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.511082 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brv2r\" (UniqueName: \"kubernetes.io/projected/0cab2dc0-42b2-4029-8388-b20c287698bc-kube-api-access-brv2r\") pod \"apiserver-7bbb656c7d-thkng\" (UID: \"0cab2dc0-42b2-4029-8388-b20c287698bc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.523250 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7t6k\" (UniqueName: \"kubernetes.io/projected/2a66fefc-bc9d-4922-821a-63e84b87e740-kube-api-access-x7t6k\") pod \"machine-approver-56656f9798-fmgw6\" (UID: \"2a66fefc-bc9d-4922-821a-63e84b87e740\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.527770 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.549944 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.569093 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4mgd\" (UniqueName: \"kubernetes.io/projected/cfe7c397-99ae-494d-a418-b0f08568f156-kube-api-access-z4mgd\") pod \"console-operator-58897d9998-8lsbn\" (UID: \"cfe7c397-99ae-494d-a418-b0f08568f156\") " pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.585441 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v22q4\" (UniqueName: \"kubernetes.io/projected/dab59711-c8b1-43f3-a608-0f892c43ac60-kube-api-access-v22q4\") pod \"machine-api-operator-5694c8668f-hxj4k\" (UID: \"dab59711-c8b1-43f3-a608-0f892c43ac60\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.588220 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.608682 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.614825 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.630771 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.649514 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.670055 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.677801 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.699250 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.717169 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.736127 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.736521 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.755265 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.763123 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjzms\" (UniqueName: \"kubernetes.io/projected/131a949e-2d37-47b4-8d7e-1f1e1afb9283-kube-api-access-rjzms\") pod \"controller-manager-879f6c89f-88ktq\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.765103 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.768946 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.775150 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7r6t\" (UniqueName: \"kubernetes.io/projected/51f498e1-f13f-4977-a3e3-ea8bc6b75c6f-kube-api-access-w7r6t\") pod \"apiserver-76f77b778f-r8fk2\" (UID: \"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f\") " pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.790906 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.811367 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 11:54:51 crc kubenswrapper[4865]: W0123 11:54:51.819307 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a66fefc_bc9d_4922_821a_63e84b87e740.slice/crio-61b166565f1729bdacba0e84132507d77b5612072d55801c6586e0e348170dc2 WatchSource:0}: Error finding container 61b166565f1729bdacba0e84132507d77b5612072d55801c6586e0e348170dc2: Status 404 returned error can't find the container with id 61b166565f1729bdacba0e84132507d77b5612072d55801c6586e0e348170dc2 Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.831182 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.850161 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.868670 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.875064 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-znx59"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.875116 4865 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hxj4k"] Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.890831 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 11:54:51 crc kubenswrapper[4865]: W0123 11:54:51.903060 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod141f6171_3d39_421b_98f4_6accc5d30ae2.slice/crio-35d008bf11c8a99daac62cd5f7dca3f6899e758054f8e0de898ed211086d18b7 WatchSource:0}: Error finding container 35d008bf11c8a99daac62cd5f7dca3f6899e758054f8e0de898ed211086d18b7: Status 404 returned error can't find the container with id 35d008bf11c8a99daac62cd5f7dca3f6899e758054f8e0de898ed211086d18b7 Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.905739 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" event={"ID":"2a66fefc-bc9d-4922-821a-63e84b87e740","Type":"ContainerStarted","Data":"61b166565f1729bdacba0e84132507d77b5612072d55801c6586e0e348170dc2"} Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.907853 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" event={"ID":"141f6171-3d39-421b-98f4-6accc5d30ae2","Type":"ContainerStarted","Data":"35d008bf11c8a99daac62cd5f7dca3f6899e758054f8e0de898ed211086d18b7"} Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.908571 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.910884 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" event={"ID":"dab59711-c8b1-43f3-a608-0f892c43ac60","Type":"ContainerStarted","Data":"9929b325e9b9b62340c449c510af8f6a84bd92060838fc850db7321ee8395a46"} Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.928108 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.942256 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.955952 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.967110 4865 request.go:700] Waited for 1.012094051s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0 Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.969320 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.984047 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:54:51 crc kubenswrapper[4865]: I0123 11:54:51.989143 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.008724 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.014355 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8lsbn"] Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.030042 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.048035 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.055808 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-hrzcb"] Jan 23 11:54:52 crc kubenswrapper[4865]: W0123 11:54:52.066658 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8896518_4b5b_4712_9994_0bb445a3504f.slice/crio-003c64110dc95ea9a3731776f938a32169f5795f4af37f73dad9f88e5cec3504 WatchSource:0}: Error finding container 003c64110dc95ea9a3731776f938a32169f5795f4af37f73dad9f88e5cec3504: Status 404 returned error can't find the container with id 003c64110dc95ea9a3731776f938a32169f5795f4af37f73dad9f88e5cec3504 Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.068138 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.089163 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.109264 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.112424 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc"] Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.132174 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.153539 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.170389 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.174744 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng"] Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.188969 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.208452 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.224130 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-88ktq"] Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.228198 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.249193 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.261871 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-r8fk2"] Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.274366 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.287878 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.307673 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.328466 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.348290 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.367963 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.388416 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.408384 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.428829 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.448401 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.467560 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.488343 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.509525 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.529350 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"serving-cert" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.549221 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.569322 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.588951 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 11:54:52 crc kubenswrapper[4865]: W0123 11:54:52.594095 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5137707_0cd3_4f39_9d73_3401d315e827.slice/crio-a085f89fa9a4377029e49419bcb2192063bd2fa07761cec9dc4daaeb121ac06e WatchSource:0}: Error finding container a085f89fa9a4377029e49419bcb2192063bd2fa07761cec9dc4daaeb121ac06e: Status 404 returned error can't find the container with id a085f89fa9a4377029e49419bcb2192063bd2fa07761cec9dc4daaeb121ac06e Jan 23 11:54:52 crc kubenswrapper[4865]: W0123 11:54:52.594926 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cab2dc0_42b2_4029_8388_b20c287698bc.slice/crio-dcb6d0e2656bbca0198c798f721a1090d59c3fff22e0cd20583260c216d377f1 WatchSource:0}: Error finding container dcb6d0e2656bbca0198c798f721a1090d59c3fff22e0cd20583260c216d377f1: Status 404 returned error can't find the container with id dcb6d0e2656bbca0198c798f721a1090d59c3fff22e0cd20583260c216d377f1 Jan 23 11:54:52 crc kubenswrapper[4865]: W0123 11:54:52.597427 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod131a949e_2d37_47b4_8d7e_1f1e1afb9283.slice/crio-95364e09f96d05bba3f7646b7da419a575c73301f522266f5ea9d82de5c37296 WatchSource:0}: Error finding container 95364e09f96d05bba3f7646b7da419a575c73301f522266f5ea9d82de5c37296: Status 404 returned error can't find the container with id 95364e09f96d05bba3f7646b7da419a575c73301f522266f5ea9d82de5c37296 Jan 23 11:54:52 crc kubenswrapper[4865]: W0123 11:54:52.601487 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51f498e1_f13f_4977_a3e3_ea8bc6b75c6f.slice/crio-3f62a352ccaddab80ae4b14d0a955d47401c8b675445ae49b57d976dd14be2af WatchSource:0}: Error finding container 3f62a352ccaddab80ae4b14d0a955d47401c8b675445ae49b57d976dd14be2af: Status 404 returned error can't find the container with id 3f62a352ccaddab80ae4b14d0a955d47401c8b675445ae49b57d976dd14be2af Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.609891 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.629973 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.653503 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.672374 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.691329 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.711543 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.729512 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.748737 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.769090 4865 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.791113 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.809910 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.829116 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.847840 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.868468 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.896906 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.909346 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.928870 4865 generic.go:334] "Generic (PLEG): container finished" podID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerID="b1a9e188f0ad4b5e3fd39c6bf6b5db420975507ab4198a38c1bdda2dade5d4a1" exitCode=0 Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.929249 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" event={"ID":"141f6171-3d39-421b-98f4-6accc5d30ae2","Type":"ContainerDied","Data":"b1a9e188f0ad4b5e3fd39c6bf6b5db420975507ab4198a38c1bdda2dade5d4a1"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.936541 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" event={"ID":"d5137707-0cd3-4f39-9d73-3401d315e827","Type":"ContainerStarted","Data":"596ee86f5ece4078ac6a90e14c52b12d7b428a8a8b1967199d42d72f90b8924b"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.936580 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" 
event={"ID":"d5137707-0cd3-4f39-9d73-3401d315e827","Type":"ContainerStarted","Data":"a085f89fa9a4377029e49419bcb2192063bd2fa07761cec9dc4daaeb121ac06e"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.937052 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.938494 4865 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7ddsc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.938530 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" podUID="d5137707-0cd3-4f39-9d73-3401d315e827" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.941791 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" event={"ID":"131a949e-2d37-47b4-8d7e-1f1e1afb9283","Type":"ContainerStarted","Data":"320d71b41214ea352e0e5a25a063242581a6ab1f64cad04090ad89f0897dea40"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.941835 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" event={"ID":"131a949e-2d37-47b4-8d7e-1f1e1afb9283","Type":"ContainerStarted","Data":"95364e09f96d05bba3f7646b7da419a575c73301f522266f5ea9d82de5c37296"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.942001 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.942902 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" event={"ID":"0cab2dc0-42b2-4029-8388-b20c287698bc","Type":"ContainerStarted","Data":"dcb6d0e2656bbca0198c798f721a1090d59c3fff22e0cd20583260c216d377f1"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.944970 4865 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-88ktq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.945009 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" podUID="131a949e-2d37-47b4-8d7e-1f1e1afb9283" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.948922 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" event={"ID":"cfe7c397-99ae-494d-a418-b0f08568f156","Type":"ContainerStarted","Data":"45ea759c1c5e5541e38c656a91725ebb01f67b53b71f6d6ca75e869cf22a64ba"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.948972 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" event={"ID":"cfe7c397-99ae-494d-a418-b0f08568f156","Type":"ContainerStarted","Data":"e9197bfa14756b80b2ff58b41b99c27924a5308ec09b335fece907e7ebe098fb"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.949650 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.950680 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.950712 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.956168 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" event={"ID":"dab59711-c8b1-43f3-a608-0f892c43ac60","Type":"ContainerStarted","Data":"8327efabbe6ef69e781091b87a635d6c496ae3101a7ac1120d39a522c58d940b"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.956322 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" event={"ID":"dab59711-c8b1-43f3-a608-0f892c43ac60","Type":"ContainerStarted","Data":"0100f6306a858f4ebdebd806e3099fb03d70ff5be412b2521988f5fc509a6999"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.958546 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" event={"ID":"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f","Type":"ContainerStarted","Data":"3f62a352ccaddab80ae4b14d0a955d47401c8b675445ae49b57d976dd14be2af"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.966102 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" event={"ID":"2a66fefc-bc9d-4922-821a-63e84b87e740","Type":"ContainerStarted","Data":"e381b12fad97e3cb2cc01b1415d30f165d877536b6acb0487d868102ad13a690"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.968824 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" event={"ID":"c8896518-4b5b-4712-9994-0bb445a3504f","Type":"ContainerStarted","Data":"9e2d67f5b624196c2ca7a39eb784d04a4289c7a162f39ffd49470ee7ed4b98ed"} Jan 23 11:54:52 crc kubenswrapper[4865]: I0123 11:54:52.968868 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" event={"ID":"c8896518-4b5b-4712-9994-0bb445a3504f","Type":"ContainerStarted","Data":"003c64110dc95ea9a3731776f938a32169f5795f4af37f73dad9f88e5cec3504"} Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.013396 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3fbcdfcf-19cc-46b9-a986-bd9426751459-stats-auth\") pod \"router-default-5444994796-swk7h\" (UID: 
\"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.013461 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79e3141-542e-448b-a2fd-2ac6fc6ef33b-config\") pod \"kube-apiserver-operator-766d6c64bb-zv7b2\" (UID: \"e79e3141-542e-448b-a2fd-2ac6fc6ef33b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.013527 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-registry-tls\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.013545 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.013629 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/adc78811-9b09-4d82-bba2-2a63f0c52f7b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n4z5m\" (UID: \"adc78811-9b09-4d82-bba2-2a63f0c52f7b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.013646 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3670e4c8-6d81-4ace-ad05-7c045097b991-serving-cert\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.013662 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79e3141-542e-448b-a2fd-2ac6fc6ef33b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zv7b2\" (UID: \"e79e3141-542e-448b-a2fd-2ac6fc6ef33b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.013725 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-bound-sa-token\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.013742 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfb95e1-29f2-4c22-8ec7-6683cf251601-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v2sh5\" (UID: 
\"abfb95e1-29f2-4c22-8ec7-6683cf251601\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.014012 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015154 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5kq\" (UniqueName: \"kubernetes.io/projected/9a3c42c2-81aa-404b-ad80-7d534f6a6007-kube-api-access-hr5kq\") pod \"dns-operator-744455d44c-gzd5x\" (UID: \"9a3c42c2-81aa-404b-ad80-7d534f6a6007\") " pod="openshift-dns-operator/dns-operator-744455d44c-gzd5x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015178 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/34e16446-9445-4646-bf3b-08764f77f949-console-serving-cert\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015204 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015229 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nkt6\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-kube-api-access-4nkt6\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015243 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb9c4\" (UniqueName: \"kubernetes.io/projected/3fbcdfcf-19cc-46b9-a986-bd9426751459-kube-api-access-nb9c4\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015275 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015292 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015318 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015343 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3670e4c8-6d81-4ace-ad05-7c045097b991-etcd-client\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015356 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/34e16446-9445-4646-bf3b-08764f77f949-console-oauth-config\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015380 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9a3c42c2-81aa-404b-ad80-7d534f6a6007-metrics-tls\") pod \"dns-operator-744455d44c-gzd5x\" (UID: \"9a3c42c2-81aa-404b-ad80-7d534f6a6007\") " pod="openshift-dns-operator/dns-operator-744455d44c-gzd5x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015395 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015410 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015426 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7zdb\" (UniqueName: \"kubernetes.io/projected/adc78811-9b09-4d82-bba2-2a63f0c52f7b-kube-api-access-t7zdb\") pod \"machine-config-controller-84d6567774-n4z5m\" (UID: \"adc78811-9b09-4d82-bba2-2a63f0c52f7b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015449 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/3670e4c8-6d81-4ace-ad05-7c045097b991-etcd-ca\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015521 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3fbcdfcf-19cc-46b9-a986-bd9426751459-default-certificate\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015540 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015565 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/67fa6cdb-c380-4b05-a05d-9df4a4b86019-audit-dir\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015587 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-trusted-ca\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015617 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfb95e1-29f2-4c22-8ec7-6683cf251601-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v2sh5\" (UID: \"abfb95e1-29f2-4c22-8ec7-6683cf251601\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015638 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015653 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k665n\" (UniqueName: \"kubernetes.io/projected/34e16446-9445-4646-bf3b-08764f77f949-kube-api-access-k665n\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015671 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d374fa65-2538-47f8-abc5-5f7eac853d58-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xhktr\" (UID: \"d374fa65-2538-47f8-abc5-5f7eac853d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015687 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b7e3b68e-b5c0-4446-9d59-39be6a478326-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vfc5n\" (UID: \"b7e3b68e-b5c0-4446-9d59-39be6a478326\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015702 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e79e3141-542e-448b-a2fd-2ac6fc6ef33b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zv7b2\" (UID: \"e79e3141-542e-448b-a2fd-2ac6fc6ef33b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015738 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-audit-policies\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015754 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015768 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcbpw\" (UniqueName: \"kubernetes.io/projected/abfb95e1-29f2-4c22-8ec7-6683cf251601-kube-api-access-zcbpw\") pod \"openshift-controller-manager-operator-756b6f6bc6-v2sh5\" (UID: \"abfb95e1-29f2-4c22-8ec7-6683cf251601\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015781 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3670e4c8-6d81-4ace-ad05-7c045097b991-etcd-service-ca\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015815 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5z79\" (UniqueName: \"kubernetes.io/projected/bdee5ba9-99e1-495c-9b52-f670cbbffea2-kube-api-access-j5z79\") pod \"downloads-7954f5f757-48b72\" (UID: \"bdee5ba9-99e1-495c-9b52-f670cbbffea2\") " pod="openshift-console/downloads-7954f5f757-48b72" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015828 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fbcdfcf-19cc-46b9-a986-bd9426751459-service-ca-bundle\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015844 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d374fa65-2538-47f8-abc5-5f7eac853d58-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xhktr\" (UID: \"d374fa65-2538-47f8-abc5-5f7eac853d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015859 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3670e4c8-6d81-4ace-ad05-7c045097b991-config\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015874 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/03a35268-fb83-4d1b-8880-ed275cc23052-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xfxrn\" (UID: \"03a35268-fb83-4d1b-8880-ed275cc23052\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015890 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fbcdfcf-19cc-46b9-a986-bd9426751459-metrics-certs\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015906 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxs7x\" (UniqueName: \"kubernetes.io/projected/d374fa65-2538-47f8-abc5-5f7eac853d58-kube-api-access-fxs7x\") pod \"openshift-apiserver-operator-796bbdcf4f-xhktr\" (UID: \"d374fa65-2538-47f8-abc5-5f7eac853d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015932 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015946 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015962 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr2p5\" (UniqueName: \"kubernetes.io/projected/3670e4c8-6d81-4ace-ad05-7c045097b991-kube-api-access-dr2p5\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.015977 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7e3b68e-b5c0-4446-9d59-39be6a478326-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vfc5n\" (UID: \"b7e3b68e-b5c0-4446-9d59-39be6a478326\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.016021 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xnfn\" (UniqueName: \"kubernetes.io/projected/67fa6cdb-c380-4b05-a05d-9df4a4b86019-kube-api-access-7xnfn\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.017232 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-trusted-ca-bundle\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.017270 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-oauth-serving-cert\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.017293 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-registry-certificates\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.017315 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/adc78811-9b09-4d82-bba2-2a63f0c52f7b-proxy-tls\") pod \"machine-config-controller-84d6567774-n4z5m\" (UID: \"adc78811-9b09-4d82-bba2-2a63f0c52f7b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.017361 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-service-ca\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.017557 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b7e3b68e-b5c0-4446-9d59-39be6a478326-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vfc5n\" (UID: \"b7e3b68e-b5c0-4446-9d59-39be6a478326\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.017869 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59qqp\" (UniqueName: \"kubernetes.io/projected/03a35268-fb83-4d1b-8880-ed275cc23052-kube-api-access-59qqp\") pod \"cluster-samples-operator-665b6dd947-xfxrn\" (UID: \"03a35268-fb83-4d1b-8880-ed275cc23052\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.017938 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxrj2\" (UniqueName: \"kubernetes.io/projected/b7e3b68e-b5c0-4446-9d59-39be6a478326-kube-api-access-vxrj2\") pod \"cluster-image-registry-operator-dc59b4c8b-vfc5n\" (UID: \"b7e3b68e-b5c0-4446-9d59-39be6a478326\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.018169 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.018234 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-console-config\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: E0123 11:54:53.018436 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:53.518424957 +0000 UTC m=+137.687497183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119074 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:53 crc kubenswrapper[4865]: E0123 11:54:53.119249 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:53.619220689 +0000 UTC m=+137.788292915 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119310 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-oauth-serving-cert\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119331 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-registry-certificates\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119347 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/adc78811-9b09-4d82-bba2-2a63f0c52f7b-proxy-tls\") pod \"machine-config-controller-84d6567774-n4z5m\" (UID: \"adc78811-9b09-4d82-bba2-2a63f0c52f7b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119361 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-service-ca\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119389 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/62eff2ee-82ac-4672-9109-3a72c02f32e6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdt9c\" (UID: \"62eff2ee-82ac-4672-9109-3a72c02f32e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119405 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3e0d8d02-d114-4cc4-9a04-823669e39fa2-signing-key\") pod \"service-ca-9c57cc56f-8mx4s\" (UID: \"3e0d8d02-d114-4cc4-9a04-823669e39fa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119423 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-plugins-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119442 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59qqp\" (UniqueName: \"kubernetes.io/projected/03a35268-fb83-4d1b-8880-ed275cc23052-kube-api-access-59qqp\") pod \"cluster-samples-operator-665b6dd947-xfxrn\" (UID: \"03a35268-fb83-4d1b-8880-ed275cc23052\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119468 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxrj2\" (UniqueName: \"kubernetes.io/projected/b7e3b68e-b5c0-4446-9d59-39be6a478326-kube-api-access-vxrj2\") pod \"cluster-image-registry-operator-dc59b4c8b-vfc5n\" (UID: \"b7e3b68e-b5c0-4446-9d59-39be6a478326\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119487 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsb9q\" (UniqueName: \"kubernetes.io/projected/d3d080f1-896a-4b64-8ff3-05db0fd12be3-kube-api-access-tsb9q\") pod \"ingress-operator-5b745b69d9-8xqt5\" (UID: \"d3d080f1-896a-4b64-8ff3-05db0fd12be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119503 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-registry-tls\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119519 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119535 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3670e4c8-6d81-4ace-ad05-7c045097b991-serving-cert\") pod 
\"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119554 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr5kq\" (UniqueName: \"kubernetes.io/projected/9a3c42c2-81aa-404b-ad80-7d534f6a6007-kube-api-access-hr5kq\") pod \"dns-operator-744455d44c-gzd5x\" (UID: \"9a3c42c2-81aa-404b-ad80-7d534f6a6007\") " pod="openshift-dns-operator/dns-operator-744455d44c-gzd5x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119569 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119587 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zt6t\" (UniqueName: \"kubernetes.io/projected/422b74fd-82b6-4fe3-b9e6-fc044ec8436f-kube-api-access-6zt6t\") pod \"machine-config-operator-74547568cd-2crw8\" (UID: \"422b74fd-82b6-4fe3-b9e6-fc044ec8436f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119625 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nkt6\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-kube-api-access-4nkt6\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119642 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nb9c4\" (UniqueName: \"kubernetes.io/projected/3fbcdfcf-19cc-46b9-a986-bd9426751459-kube-api-access-nb9c4\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119659 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3e0d8d02-d114-4cc4-9a04-823669e39fa2-signing-cabundle\") pod \"service-ca-9c57cc56f-8mx4s\" (UID: \"3e0d8d02-d114-4cc4-9a04-823669e39fa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119686 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/843c383b-053f-42f5-88ce-7a216f5354a3-srv-cert\") pod \"catalog-operator-68c6474976-42cdm\" (UID: \"843c383b-053f-42f5-88ce-7a216f5354a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119703 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119718 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119753 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ff3d137-2611-43f9-9825-4839a271fc69-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zbtbp\" (UID: \"4ff3d137-2611-43f9-9825-4839a271fc69\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119775 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v684\" (UniqueName: \"kubernetes.io/projected/2c1ba660-8691-49e2-b0cc-056355d82f4c-kube-api-access-2v684\") pod \"package-server-manager-789f6589d5-4g249\" (UID: \"2c1ba660-8691-49e2-b0cc-056355d82f4c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119804 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119822 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7zdb\" (UniqueName: \"kubernetes.io/projected/adc78811-9b09-4d82-bba2-2a63f0c52f7b-kube-api-access-t7zdb\") pod \"machine-config-controller-84d6567774-n4z5m\" (UID: \"adc78811-9b09-4d82-bba2-2a63f0c52f7b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119839 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3670e4c8-6d81-4ace-ad05-7c045097b991-etcd-ca\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119858 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkf4m\" (UniqueName: \"kubernetes.io/projected/bbdd3e92-1864-4a2b-9284-720a4813247a-kube-api-access-vkf4m\") pod \"machine-config-server-cshcx\" (UID: \"bbdd3e92-1864-4a2b-9284-720a4813247a\") " pod="openshift-machine-config-operator/machine-config-server-cshcx" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119878 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9a3c42c2-81aa-404b-ad80-7d534f6a6007-metrics-tls\") pod \"dns-operator-744455d44c-gzd5x\" (UID: \"9a3c42c2-81aa-404b-ad80-7d534f6a6007\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-gzd5x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119911 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bbdd3e92-1864-4a2b-9284-720a4813247a-node-bootstrap-token\") pod \"machine-config-server-cshcx\" (UID: \"bbdd3e92-1864-4a2b-9284-720a4813247a\") " pod="openshift-machine-config-operator/machine-config-server-cshcx" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119928 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj5tk\" (UniqueName: \"kubernetes.io/projected/752d7a7c-100b-4c07-a601-96309d9e4a33-kube-api-access-bj5tk\") pod \"multus-admission-controller-857f4d67dd-78x6m\" (UID: \"752d7a7c-100b-4c07-a601-96309d9e4a33\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-78x6m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119947 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7t2x\" (UniqueName: \"kubernetes.io/projected/8b1224b1-f7b9-48de-9842-9c0c91f4d96a-kube-api-access-g7t2x\") pod \"migrator-59844c95c7-tb57w\" (UID: \"8b1224b1-f7b9-48de-9842-9c0c91f4d96a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tb57w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119965 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/582f83b4-97dc-4f56-9879-c73fab80488a-srv-cert\") pod \"olm-operator-6b444d44fb-g5xkl\" (UID: \"582f83b4-97dc-4f56-9879-c73fab80488a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.119984 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3d080f1-896a-4b64-8ff3-05db0fd12be3-metrics-tls\") pod \"ingress-operator-5b745b69d9-8xqt5\" (UID: \"d3d080f1-896a-4b64-8ff3-05db0fd12be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120010 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/422b74fd-82b6-4fe3-b9e6-fc044ec8436f-images\") pod \"machine-config-operator-74547568cd-2crw8\" (UID: \"422b74fd-82b6-4fe3-b9e6-fc044ec8436f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120027 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-csi-data-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120055 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/67fa6cdb-c380-4b05-a05d-9df4a4b86019-audit-dir\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 
11:54:53.120070 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-trusted-ca\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120084 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfb95e1-29f2-4c22-8ec7-6683cf251601-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v2sh5\" (UID: \"abfb95e1-29f2-4c22-8ec7-6683cf251601\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120102 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/752d7a7c-100b-4c07-a601-96309d9e4a33-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-78x6m\" (UID: \"752d7a7c-100b-4c07-a601-96309d9e4a33\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-78x6m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120118 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9acb5d27-9286-4d3d-9e36-237482223717-config\") pod \"service-ca-operator-777779d784-7hq88\" (UID: \"9acb5d27-9286-4d3d-9e36-237482223717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120133 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d080f1-896a-4b64-8ff3-05db0fd12be3-trusted-ca\") pod \"ingress-operator-5b745b69d9-8xqt5\" (UID: \"d3d080f1-896a-4b64-8ff3-05db0fd12be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120151 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d374fa65-2538-47f8-abc5-5f7eac853d58-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xhktr\" (UID: \"d374fa65-2538-47f8-abc5-5f7eac853d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120166 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b7e3b68e-b5c0-4446-9d59-39be6a478326-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vfc5n\" (UID: \"b7e3b68e-b5c0-4446-9d59-39be6a478326\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120182 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e79e3141-542e-448b-a2fd-2ac6fc6ef33b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zv7b2\" (UID: \"e79e3141-542e-448b-a2fd-2ac6fc6ef33b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120198 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn69j\" (UniqueName: \"kubernetes.io/projected/401d6c1a-be67-4fb7-97f6-d46e3ba35960-kube-api-access-zn69j\") pod \"marketplace-operator-79b997595-mwzzv\" (UID: \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\") " pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120216 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/843c383b-053f-42f5-88ce-7a216f5354a3-profile-collector-cert\") pod \"catalog-operator-68c6474976-42cdm\" (UID: \"843c383b-053f-42f5-88ce-7a216f5354a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120232 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-audit-policies\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120249 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-registration-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120264 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/582f83b4-97dc-4f56-9879-c73fab80488a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g5xkl\" (UID: \"582f83b4-97dc-4f56-9879-c73fab80488a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120278 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2699af1d-57a0-4ce2-9550-b423f9eafc0f-tmpfs\") pod \"packageserver-d55dfcdfc-xwjxp\" (UID: \"2699af1d-57a0-4ce2-9550-b423f9eafc0f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120280 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-service-ca\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120295 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3670e4c8-6d81-4ace-ad05-7c045097b991-config\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120357 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5z79\" (UniqueName: \"kubernetes.io/projected/bdee5ba9-99e1-495c-9b52-f670cbbffea2-kube-api-access-j5z79\") pod 
\"downloads-7954f5f757-48b72\" (UID: \"bdee5ba9-99e1-495c-9b52-f670cbbffea2\") " pod="openshift-console/downloads-7954f5f757-48b72" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120379 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/03a35268-fb83-4d1b-8880-ed275cc23052-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xfxrn\" (UID: \"03a35268-fb83-4d1b-8880-ed275cc23052\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120397 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fbcdfcf-19cc-46b9-a986-bd9426751459-metrics-certs\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120416 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-config-volume\") pod \"collect-profiles-29486145-rzp9s\" (UID: \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120425 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-registry-certificates\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120444 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120461 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr2p5\" (UniqueName: \"kubernetes.io/projected/3670e4c8-6d81-4ace-ad05-7c045097b991-kube-api-access-dr2p5\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120477 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99ztv\" (UniqueName: \"kubernetes.io/projected/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-kube-api-access-99ztv\") pod \"collect-profiles-29486145-rzp9s\" (UID: \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120492 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bbdd3e92-1864-4a2b-9284-720a4813247a-certs\") pod \"machine-config-server-cshcx\" (UID: \"bbdd3e92-1864-4a2b-9284-720a4813247a\") " 
pod="openshift-machine-config-operator/machine-config-server-cshcx" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120522 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpgs4\" (UniqueName: \"kubernetes.io/projected/2699af1d-57a0-4ce2-9550-b423f9eafc0f-kube-api-access-vpgs4\") pod \"packageserver-d55dfcdfc-xwjxp\" (UID: \"2699af1d-57a0-4ce2-9550-b423f9eafc0f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120538 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/401d6c1a-be67-4fb7-97f6-d46e3ba35960-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mwzzv\" (UID: \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\") " pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120558 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c1ba660-8691-49e2-b0cc-056355d82f4c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4g249\" (UID: \"2c1ba660-8691-49e2-b0cc-056355d82f4c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120575 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-trusted-ca-bundle\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120594 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-mountpoint-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120634 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e265134c-b7db-4575-a84b-bc2c6806fffb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-w8js8\" (UID: \"e265134c-b7db-4575-a84b-bc2c6806fffb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120651 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjqdh\" (UniqueName: \"kubernetes.io/projected/3e0d8d02-d114-4cc4-9a04-823669e39fa2-kube-api-access-zjqdh\") pod \"service-ca-9c57cc56f-8mx4s\" (UID: \"3e0d8d02-d114-4cc4-9a04-823669e39fa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120668 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b7e3b68e-b5c0-4446-9d59-39be6a478326-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vfc5n\" (UID: \"b7e3b68e-b5c0-4446-9d59-39be6a478326\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120689 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzqq2\" (UniqueName: \"kubernetes.io/projected/582f83b4-97dc-4f56-9879-c73fab80488a-kube-api-access-tzqq2\") pod \"olm-operator-6b444d44fb-g5xkl\" (UID: \"582f83b4-97dc-4f56-9879-c73fab80488a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120706 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds5k7\" (UniqueName: \"kubernetes.io/projected/4ff3d137-2611-43f9-9825-4839a271fc69-kube-api-access-ds5k7\") pod \"kube-storage-version-migrator-operator-b67b599dd-zbtbp\" (UID: \"4ff3d137-2611-43f9-9825-4839a271fc69\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120753 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120770 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-console-config\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120796 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79e3141-542e-448b-a2fd-2ac6fc6ef33b-config\") pod \"kube-apiserver-operator-766d6c64bb-zv7b2\" (UID: \"e79e3141-542e-448b-a2fd-2ac6fc6ef33b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120813 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3fbcdfcf-19cc-46b9-a986-bd9426751459-stats-auth\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120831 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/adc78811-9b09-4d82-bba2-2a63f0c52f7b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n4z5m\" (UID: \"adc78811-9b09-4d82-bba2-2a63f0c52f7b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120847 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79e3141-542e-448b-a2fd-2ac6fc6ef33b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zv7b2\" (UID: \"e79e3141-542e-448b-a2fd-2ac6fc6ef33b\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120862 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/422b74fd-82b6-4fe3-b9e6-fc044ec8436f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2crw8\" (UID: \"422b74fd-82b6-4fe3-b9e6-fc044ec8436f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120879 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-bound-sa-token\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120883 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3670e4c8-6d81-4ace-ad05-7c045097b991-config\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120896 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfb95e1-29f2-4c22-8ec7-6683cf251601-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v2sh5\" (UID: \"abfb95e1-29f2-4c22-8ec7-6683cf251601\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120915 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ff3d137-2611-43f9-9825-4839a271fc69-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zbtbp\" (UID: \"4ff3d137-2611-43f9-9825-4839a271fc69\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120939 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a8ae231-47bc-49ee-8413-de5a08c05d08-config-volume\") pod \"dns-default-z28b7\" (UID: \"4a8ae231-47bc-49ee-8413-de5a08c05d08\") " pod="openshift-dns/dns-default-z28b7" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120967 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4jvq\" (UniqueName: \"kubernetes.io/projected/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-kube-api-access-k4jvq\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120983 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 
11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.120999 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/34e16446-9445-4646-bf3b-08764f77f949-console-serving-cert\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121035 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-socket-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121063 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121078 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/422b74fd-82b6-4fe3-b9e6-fc044ec8436f-proxy-tls\") pod \"machine-config-operator-74547568cd-2crw8\" (UID: \"422b74fd-82b6-4fe3-b9e6-fc044ec8436f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121097 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3670e4c8-6d81-4ace-ad05-7c045097b991-etcd-client\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121112 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/34e16446-9445-4646-bf3b-08764f77f949-console-oauth-config\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121131 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121147 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-729dk\" (UniqueName: \"kubernetes.io/projected/41b1eece-7199-4214-9add-15fd7c3039c7-kube-api-access-729dk\") pod \"control-plane-machine-set-operator-78cbb6b69f-25dzs\" (UID: \"41b1eece-7199-4214-9add-15fd7c3039c7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121164 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zcgs9\" (UniqueName: \"kubernetes.io/projected/4a8ae231-47bc-49ee-8413-de5a08c05d08-kube-api-access-zcgs9\") pod \"dns-default-z28b7\" (UID: \"4a8ae231-47bc-49ee-8413-de5a08c05d08\") " pod="openshift-dns/dns-default-z28b7" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121179 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgnt8\" (UniqueName: \"kubernetes.io/projected/e1f4986a-71c2-4cba-a049-8f1ea07cfd17-kube-api-access-sgnt8\") pod \"ingress-canary-bx4k5\" (UID: \"e1f4986a-71c2-4cba-a049-8f1ea07cfd17\") " pod="openshift-ingress-canary/ingress-canary-bx4k5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121197 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/41b1eece-7199-4214-9add-15fd7c3039c7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-25dzs\" (UID: \"41b1eece-7199-4214-9add-15fd7c3039c7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121212 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4a8ae231-47bc-49ee-8413-de5a08c05d08-metrics-tls\") pod \"dns-default-z28b7\" (UID: \"4a8ae231-47bc-49ee-8413-de5a08c05d08\") " pod="openshift-dns/dns-default-z28b7" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121238 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3fbcdfcf-19cc-46b9-a986-bd9426751459-default-certificate\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121254 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-secret-volume\") pod \"collect-profiles-29486145-rzp9s\" (UID: \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121270 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d080f1-896a-4b64-8ff3-05db0fd12be3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8xqt5\" (UID: \"d3d080f1-896a-4b64-8ff3-05db0fd12be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121289 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121306 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e1f4986a-71c2-4cba-a049-8f1ea07cfd17-cert\") pod 
\"ingress-canary-bx4k5\" (UID: \"e1f4986a-71c2-4cba-a049-8f1ea07cfd17\") " pod="openshift-ingress-canary/ingress-canary-bx4k5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121321 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/401d6c1a-be67-4fb7-97f6-d46e3ba35960-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mwzzv\" (UID: \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\") " pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121336 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9acb5d27-9286-4d3d-9e36-237482223717-serving-cert\") pod \"service-ca-operator-777779d784-7hq88\" (UID: \"9acb5d27-9286-4d3d-9e36-237482223717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121352 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cflks\" (UniqueName: \"kubernetes.io/projected/843c383b-053f-42f5-88ce-7a216f5354a3-kube-api-access-cflks\") pod \"catalog-operator-68c6474976-42cdm\" (UID: \"843c383b-053f-42f5-88ce-7a216f5354a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121369 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e265134c-b7db-4575-a84b-bc2c6806fffb-config\") pod \"kube-controller-manager-operator-78b949d7b-w8js8\" (UID: \"e265134c-b7db-4575-a84b-bc2c6806fffb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121384 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e265134c-b7db-4575-a84b-bc2c6806fffb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-w8js8\" (UID: \"e265134c-b7db-4575-a84b-bc2c6806fffb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121415 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121444 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k665n\" (UniqueName: \"kubernetes.io/projected/34e16446-9445-4646-bf3b-08764f77f949-kube-api-access-k665n\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121463 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121482 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcbpw\" (UniqueName: \"kubernetes.io/projected/abfb95e1-29f2-4c22-8ec7-6683cf251601-kube-api-access-zcbpw\") pod \"openshift-controller-manager-operator-756b6f6bc6-v2sh5\" (UID: \"abfb95e1-29f2-4c22-8ec7-6683cf251601\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121499 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3670e4c8-6d81-4ace-ad05-7c045097b991-etcd-service-ca\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121516 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2699af1d-57a0-4ce2-9550-b423f9eafc0f-webhook-cert\") pod \"packageserver-d55dfcdfc-xwjxp\" (UID: \"2699af1d-57a0-4ce2-9550-b423f9eafc0f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121543 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsqld\" (UniqueName: \"kubernetes.io/projected/9acb5d27-9286-4d3d-9e36-237482223717-kube-api-access-xsqld\") pod \"service-ca-operator-777779d784-7hq88\" (UID: \"9acb5d27-9286-4d3d-9e36-237482223717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121561 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fbcdfcf-19cc-46b9-a986-bd9426751459-service-ca-bundle\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121578 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d374fa65-2538-47f8-abc5-5f7eac853d58-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xhktr\" (UID: \"d374fa65-2538-47f8-abc5-5f7eac853d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121616 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxs7x\" (UniqueName: \"kubernetes.io/projected/d374fa65-2538-47f8-abc5-5f7eac853d58-kube-api-access-fxs7x\") pod \"openshift-apiserver-operator-796bbdcf4f-xhktr\" (UID: \"d374fa65-2538-47f8-abc5-5f7eac853d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121634 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2699af1d-57a0-4ce2-9550-b423f9eafc0f-apiservice-cert\") pod \"packageserver-d55dfcdfc-xwjxp\" (UID: 
\"2699af1d-57a0-4ce2-9550-b423f9eafc0f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121651 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121669 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7e3b68e-b5c0-4446-9d59-39be6a478326-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vfc5n\" (UID: \"b7e3b68e-b5c0-4446-9d59-39be6a478326\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121687 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62eff2ee-82ac-4672-9109-3a72c02f32e6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdt9c\" (UID: \"62eff2ee-82ac-4672-9109-3a72c02f32e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121704 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xnfn\" (UniqueName: \"kubernetes.io/projected/67fa6cdb-c380-4b05-a05d-9df4a4b86019-kube-api-access-7xnfn\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.121721 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62eff2ee-82ac-4672-9109-3a72c02f32e6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdt9c\" (UID: \"62eff2ee-82ac-4672-9109-3a72c02f32e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.122395 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/67fa6cdb-c380-4b05-a05d-9df4a4b86019-audit-dir\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.122646 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-oauth-serving-cert\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.124002 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-trusted-ca\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc 
kubenswrapper[4865]: I0123 11:54:53.124282 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b7e3b68e-b5c0-4446-9d59-39be6a478326-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vfc5n\" (UID: \"b7e3b68e-b5c0-4446-9d59-39be6a478326\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.124865 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e79e3141-542e-448b-a2fd-2ac6fc6ef33b-config\") pod \"kube-apiserver-operator-766d6c64bb-zv7b2\" (UID: \"e79e3141-542e-448b-a2fd-2ac6fc6ef33b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" Jan 23 11:54:53 crc kubenswrapper[4865]: E0123 11:54:53.125677 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:53.625660582 +0000 UTC m=+137.794732878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.125914 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfb95e1-29f2-4c22-8ec7-6683cf251601-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-v2sh5\" (UID: \"abfb95e1-29f2-4c22-8ec7-6683cf251601\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.126820 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-console-config\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.127124 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fbcdfcf-19cc-46b9-a986-bd9426751459-service-ca-bundle\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.127681 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.127909 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d374fa65-2538-47f8-abc5-5f7eac853d58-config\") pod 
\"openshift-apiserver-operator-796bbdcf4f-xhktr\" (UID: \"d374fa65-2538-47f8-abc5-5f7eac853d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.129737 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9a3c42c2-81aa-404b-ad80-7d534f6a6007-metrics-tls\") pod \"dns-operator-744455d44c-gzd5x\" (UID: \"9a3c42c2-81aa-404b-ad80-7d534f6a6007\") " pod="openshift-dns-operator/dns-operator-744455d44c-gzd5x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.129835 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.131287 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.132391 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/adc78811-9b09-4d82-bba2-2a63f0c52f7b-proxy-tls\") pod \"machine-config-controller-84d6567774-n4z5m\" (UID: \"adc78811-9b09-4d82-bba2-2a63f0c52f7b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.132921 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-trusted-ca-bundle\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.133065 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.133928 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/03a35268-fb83-4d1b-8880-ed275cc23052-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-xfxrn\" (UID: \"03a35268-fb83-4d1b-8880-ed275cc23052\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.133971 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/adc78811-9b09-4d82-bba2-2a63f0c52f7b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-n4z5m\" (UID: \"adc78811-9b09-4d82-bba2-2a63f0c52f7b\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.134996 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/34e16446-9445-4646-bf3b-08764f77f949-console-oauth-config\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.135429 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3670e4c8-6d81-4ace-ad05-7c045097b991-etcd-service-ca\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.135538 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/34e16446-9445-4646-bf3b-08764f77f949-console-serving-cert\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.136119 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-audit-policies\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.136317 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.137687 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.138236 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b7e3b68e-b5c0-4446-9d59-39be6a478326-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vfc5n\" (UID: \"b7e3b68e-b5c0-4446-9d59-39be6a478326\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.139466 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3670e4c8-6d81-4ace-ad05-7c045097b991-etcd-ca\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.139943 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.140151 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.140842 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-registry-tls\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.141715 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.143616 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.143992 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d374fa65-2538-47f8-abc5-5f7eac853d58-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xhktr\" (UID: \"d374fa65-2538-47f8-abc5-5f7eac853d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.144753 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e79e3141-542e-448b-a2fd-2ac6fc6ef33b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zv7b2\" (UID: \"e79e3141-542e-448b-a2fd-2ac6fc6ef33b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.144755 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3670e4c8-6d81-4ace-ad05-7c045097b991-etcd-client\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.145039 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfb95e1-29f2-4c22-8ec7-6683cf251601-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-v2sh5\" (UID: \"abfb95e1-29f2-4c22-8ec7-6683cf251601\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.145520 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.146041 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3670e4c8-6d81-4ace-ad05-7c045097b991-serving-cert\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.147113 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/3fbcdfcf-19cc-46b9-a986-bd9426751459-default-certificate\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.148372 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.149620 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/3fbcdfcf-19cc-46b9-a986-bd9426751459-stats-auth\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.165445 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3fbcdfcf-19cc-46b9-a986-bd9426751459-metrics-certs\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.165572 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.191432 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5z79\" (UniqueName: \"kubernetes.io/projected/bdee5ba9-99e1-495c-9b52-f670cbbffea2-kube-api-access-j5z79\") pod \"downloads-7954f5f757-48b72\" (UID: \"bdee5ba9-99e1-495c-9b52-f670cbbffea2\") " pod="openshift-console/downloads-7954f5f757-48b72" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.206106 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59qqp\" (UniqueName: 
\"kubernetes.io/projected/03a35268-fb83-4d1b-8880-ed275cc23052-kube-api-access-59qqp\") pod \"cluster-samples-operator-665b6dd947-xfxrn\" (UID: \"03a35268-fb83-4d1b-8880-ed275cc23052\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.210815 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxrj2\" (UniqueName: \"kubernetes.io/projected/b7e3b68e-b5c0-4446-9d59-39be6a478326-kube-api-access-vxrj2\") pod \"cluster-image-registry-operator-dc59b4c8b-vfc5n\" (UID: \"b7e3b68e-b5c0-4446-9d59-39be6a478326\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.222936 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:53 crc kubenswrapper[4865]: E0123 11:54:53.223151 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:53.723115744 +0000 UTC m=+137.892187970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223232 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4jvq\" (UniqueName: \"kubernetes.io/projected/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-kube-api-access-k4jvq\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223279 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a8ae231-47bc-49ee-8413-de5a08c05d08-config-volume\") pod \"dns-default-z28b7\" (UID: \"4a8ae231-47bc-49ee-8413-de5a08c05d08\") " pod="openshift-dns/dns-default-z28b7" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223307 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-socket-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223332 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/422b74fd-82b6-4fe3-b9e6-fc044ec8436f-proxy-tls\") pod \"machine-config-operator-74547568cd-2crw8\" (UID: \"422b74fd-82b6-4fe3-b9e6-fc044ec8436f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 
11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223359 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-729dk\" (UniqueName: \"kubernetes.io/projected/41b1eece-7199-4214-9add-15fd7c3039c7-kube-api-access-729dk\") pod \"control-plane-machine-set-operator-78cbb6b69f-25dzs\" (UID: \"41b1eece-7199-4214-9add-15fd7c3039c7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223380 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcgs9\" (UniqueName: \"kubernetes.io/projected/4a8ae231-47bc-49ee-8413-de5a08c05d08-kube-api-access-zcgs9\") pod \"dns-default-z28b7\" (UID: \"4a8ae231-47bc-49ee-8413-de5a08c05d08\") " pod="openshift-dns/dns-default-z28b7" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223409 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgnt8\" (UniqueName: \"kubernetes.io/projected/e1f4986a-71c2-4cba-a049-8f1ea07cfd17-kube-api-access-sgnt8\") pod \"ingress-canary-bx4k5\" (UID: \"e1f4986a-71c2-4cba-a049-8f1ea07cfd17\") " pod="openshift-ingress-canary/ingress-canary-bx4k5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223428 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/41b1eece-7199-4214-9add-15fd7c3039c7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-25dzs\" (UID: \"41b1eece-7199-4214-9add-15fd7c3039c7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223443 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4a8ae231-47bc-49ee-8413-de5a08c05d08-metrics-tls\") pod \"dns-default-z28b7\" (UID: \"4a8ae231-47bc-49ee-8413-de5a08c05d08\") " pod="openshift-dns/dns-default-z28b7" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223465 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-secret-volume\") pod \"collect-profiles-29486145-rzp9s\" (UID: \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223482 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d080f1-896a-4b64-8ff3-05db0fd12be3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8xqt5\" (UID: \"d3d080f1-896a-4b64-8ff3-05db0fd12be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223501 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cflks\" (UniqueName: \"kubernetes.io/projected/843c383b-053f-42f5-88ce-7a216f5354a3-kube-api-access-cflks\") pod \"catalog-operator-68c6474976-42cdm\" (UID: \"843c383b-053f-42f5-88ce-7a216f5354a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223518 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/e1f4986a-71c2-4cba-a049-8f1ea07cfd17-cert\") pod \"ingress-canary-bx4k5\" (UID: \"e1f4986a-71c2-4cba-a049-8f1ea07cfd17\") " pod="openshift-ingress-canary/ingress-canary-bx4k5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223533 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/401d6c1a-be67-4fb7-97f6-d46e3ba35960-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mwzzv\" (UID: \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\") " pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223550 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9acb5d27-9286-4d3d-9e36-237482223717-serving-cert\") pod \"service-ca-operator-777779d784-7hq88\" (UID: \"9acb5d27-9286-4d3d-9e36-237482223717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223568 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e265134c-b7db-4575-a84b-bc2c6806fffb-config\") pod \"kube-controller-manager-operator-78b949d7b-w8js8\" (UID: \"e265134c-b7db-4575-a84b-bc2c6806fffb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223586 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e265134c-b7db-4575-a84b-bc2c6806fffb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-w8js8\" (UID: \"e265134c-b7db-4575-a84b-bc2c6806fffb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223660 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2699af1d-57a0-4ce2-9550-b423f9eafc0f-webhook-cert\") pod \"packageserver-d55dfcdfc-xwjxp\" (UID: \"2699af1d-57a0-4ce2-9550-b423f9eafc0f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223690 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsqld\" (UniqueName: \"kubernetes.io/projected/9acb5d27-9286-4d3d-9e36-237482223717-kube-api-access-xsqld\") pod \"service-ca-operator-777779d784-7hq88\" (UID: \"9acb5d27-9286-4d3d-9e36-237482223717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223722 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2699af1d-57a0-4ce2-9550-b423f9eafc0f-apiservice-cert\") pod \"packageserver-d55dfcdfc-xwjxp\" (UID: \"2699af1d-57a0-4ce2-9550-b423f9eafc0f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223747 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62eff2ee-82ac-4672-9109-3a72c02f32e6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdt9c\" (UID: 
\"62eff2ee-82ac-4672-9109-3a72c02f32e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223770 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62eff2ee-82ac-4672-9109-3a72c02f32e6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdt9c\" (UID: \"62eff2ee-82ac-4672-9109-3a72c02f32e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223767 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-socket-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223802 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3e0d8d02-d114-4cc4-9a04-823669e39fa2-signing-key\") pod \"service-ca-9c57cc56f-8mx4s\" (UID: \"3e0d8d02-d114-4cc4-9a04-823669e39fa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223899 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62eff2ee-82ac-4672-9109-3a72c02f32e6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdt9c\" (UID: \"62eff2ee-82ac-4672-9109-3a72c02f32e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.223949 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-plugins-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224006 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsb9q\" (UniqueName: \"kubernetes.io/projected/d3d080f1-896a-4b64-8ff3-05db0fd12be3-kube-api-access-tsb9q\") pod \"ingress-operator-5b745b69d9-8xqt5\" (UID: \"d3d080f1-896a-4b64-8ff3-05db0fd12be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224049 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zt6t\" (UniqueName: \"kubernetes.io/projected/422b74fd-82b6-4fe3-b9e6-fc044ec8436f-kube-api-access-6zt6t\") pod \"machine-config-operator-74547568cd-2crw8\" (UID: \"422b74fd-82b6-4fe3-b9e6-fc044ec8436f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224098 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3e0d8d02-d114-4cc4-9a04-823669e39fa2-signing-cabundle\") pod \"service-ca-9c57cc56f-8mx4s\" (UID: \"3e0d8d02-d114-4cc4-9a04-823669e39fa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224128 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/843c383b-053f-42f5-88ce-7a216f5354a3-srv-cert\") pod \"catalog-operator-68c6474976-42cdm\" (UID: \"843c383b-053f-42f5-88ce-7a216f5354a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224170 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ff3d137-2611-43f9-9825-4839a271fc69-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zbtbp\" (UID: \"4ff3d137-2611-43f9-9825-4839a271fc69\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224203 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v684\" (UniqueName: \"kubernetes.io/projected/2c1ba660-8691-49e2-b0cc-056355d82f4c-kube-api-access-2v684\") pod \"package-server-manager-789f6589d5-4g249\" (UID: \"2c1ba660-8691-49e2-b0cc-056355d82f4c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224242 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkf4m\" (UniqueName: \"kubernetes.io/projected/bbdd3e92-1864-4a2b-9284-720a4813247a-kube-api-access-vkf4m\") pod \"machine-config-server-cshcx\" (UID: \"bbdd3e92-1864-4a2b-9284-720a4813247a\") " pod="openshift-machine-config-operator/machine-config-server-cshcx" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224268 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bbdd3e92-1864-4a2b-9284-720a4813247a-node-bootstrap-token\") pod \"machine-config-server-cshcx\" (UID: \"bbdd3e92-1864-4a2b-9284-720a4813247a\") " pod="openshift-machine-config-operator/machine-config-server-cshcx" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224295 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj5tk\" (UniqueName: \"kubernetes.io/projected/752d7a7c-100b-4c07-a601-96309d9e4a33-kube-api-access-bj5tk\") pod \"multus-admission-controller-857f4d67dd-78x6m\" (UID: \"752d7a7c-100b-4c07-a601-96309d9e4a33\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-78x6m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224327 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7t2x\" (UniqueName: \"kubernetes.io/projected/8b1224b1-f7b9-48de-9842-9c0c91f4d96a-kube-api-access-g7t2x\") pod \"migrator-59844c95c7-tb57w\" (UID: \"8b1224b1-f7b9-48de-9842-9c0c91f4d96a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tb57w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224374 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/582f83b4-97dc-4f56-9879-c73fab80488a-srv-cert\") pod \"olm-operator-6b444d44fb-g5xkl\" (UID: \"582f83b4-97dc-4f56-9879-c73fab80488a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224395 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/d3d080f1-896a-4b64-8ff3-05db0fd12be3-metrics-tls\") pod \"ingress-operator-5b745b69d9-8xqt5\" (UID: \"d3d080f1-896a-4b64-8ff3-05db0fd12be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224422 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/422b74fd-82b6-4fe3-b9e6-fc044ec8436f-images\") pod \"machine-config-operator-74547568cd-2crw8\" (UID: \"422b74fd-82b6-4fe3-b9e6-fc044ec8436f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224449 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-csi-data-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224478 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9acb5d27-9286-4d3d-9e36-237482223717-config\") pod \"service-ca-operator-777779d784-7hq88\" (UID: \"9acb5d27-9286-4d3d-9e36-237482223717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224500 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d080f1-896a-4b64-8ff3-05db0fd12be3-trusted-ca\") pod \"ingress-operator-5b745b69d9-8xqt5\" (UID: \"d3d080f1-896a-4b64-8ff3-05db0fd12be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224528 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/752d7a7c-100b-4c07-a601-96309d9e4a33-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-78x6m\" (UID: \"752d7a7c-100b-4c07-a601-96309d9e4a33\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-78x6m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224558 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn69j\" (UniqueName: \"kubernetes.io/projected/401d6c1a-be67-4fb7-97f6-d46e3ba35960-kube-api-access-zn69j\") pod \"marketplace-operator-79b997595-mwzzv\" (UID: \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\") " pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224589 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/843c383b-053f-42f5-88ce-7a216f5354a3-profile-collector-cert\") pod \"catalog-operator-68c6474976-42cdm\" (UID: \"843c383b-053f-42f5-88ce-7a216f5354a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224636 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-registration-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " 
pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224661 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/582f83b4-97dc-4f56-9879-c73fab80488a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g5xkl\" (UID: \"582f83b4-97dc-4f56-9879-c73fab80488a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224684 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2699af1d-57a0-4ce2-9550-b423f9eafc0f-tmpfs\") pod \"packageserver-d55dfcdfc-xwjxp\" (UID: \"2699af1d-57a0-4ce2-9550-b423f9eafc0f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224710 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-config-volume\") pod \"collect-profiles-29486145-rzp9s\" (UID: \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224754 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99ztv\" (UniqueName: \"kubernetes.io/projected/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-kube-api-access-99ztv\") pod \"collect-profiles-29486145-rzp9s\" (UID: \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224778 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bbdd3e92-1864-4a2b-9284-720a4813247a-certs\") pod \"machine-config-server-cshcx\" (UID: \"bbdd3e92-1864-4a2b-9284-720a4813247a\") " pod="openshift-machine-config-operator/machine-config-server-cshcx" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224802 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/401d6c1a-be67-4fb7-97f6-d46e3ba35960-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mwzzv\" (UID: \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\") " pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224830 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c1ba660-8691-49e2-b0cc-056355d82f4c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4g249\" (UID: \"2c1ba660-8691-49e2-b0cc-056355d82f4c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224858 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpgs4\" (UniqueName: \"kubernetes.io/projected/2699af1d-57a0-4ce2-9550-b423f9eafc0f-kube-api-access-vpgs4\") pod \"packageserver-d55dfcdfc-xwjxp\" (UID: \"2699af1d-57a0-4ce2-9550-b423f9eafc0f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224885 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjqdh\" (UniqueName: \"kubernetes.io/projected/3e0d8d02-d114-4cc4-9a04-823669e39fa2-kube-api-access-zjqdh\") pod \"service-ca-9c57cc56f-8mx4s\" (UID: \"3e0d8d02-d114-4cc4-9a04-823669e39fa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224909 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-mountpoint-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224933 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e265134c-b7db-4575-a84b-bc2c6806fffb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-w8js8\" (UID: \"e265134c-b7db-4575-a84b-bc2c6806fffb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224961 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzqq2\" (UniqueName: \"kubernetes.io/projected/582f83b4-97dc-4f56-9879-c73fab80488a-kube-api-access-tzqq2\") pod \"olm-operator-6b444d44fb-g5xkl\" (UID: \"582f83b4-97dc-4f56-9879-c73fab80488a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.224986 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds5k7\" (UniqueName: \"kubernetes.io/projected/4ff3d137-2611-43f9-9825-4839a271fc69-kube-api-access-ds5k7\") pod \"kube-storage-version-migrator-operator-b67b599dd-zbtbp\" (UID: \"4ff3d137-2611-43f9-9825-4839a271fc69\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.225023 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.225057 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/422b74fd-82b6-4fe3-b9e6-fc044ec8436f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2crw8\" (UID: \"422b74fd-82b6-4fe3-b9e6-fc044ec8436f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.225090 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ff3d137-2611-43f9-9825-4839a271fc69-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zbtbp\" (UID: \"4ff3d137-2611-43f9-9825-4839a271fc69\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.225918 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ff3d137-2611-43f9-9825-4839a271fc69-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-zbtbp\" (UID: \"4ff3d137-2611-43f9-9825-4839a271fc69\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.226137 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-plugins-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.226253 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/422b74fd-82b6-4fe3-b9e6-fc044ec8436f-proxy-tls\") pod \"machine-config-operator-74547568cd-2crw8\" (UID: \"422b74fd-82b6-4fe3-b9e6-fc044ec8436f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.227427 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3e0d8d02-d114-4cc4-9a04-823669e39fa2-signing-key\") pod \"service-ca-9c57cc56f-8mx4s\" (UID: \"3e0d8d02-d114-4cc4-9a04-823669e39fa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.230595 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2699af1d-57a0-4ce2-9550-b423f9eafc0f-webhook-cert\") pod \"packageserver-d55dfcdfc-xwjxp\" (UID: \"2699af1d-57a0-4ce2-9550-b423f9eafc0f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.231296 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3e0d8d02-d114-4cc4-9a04-823669e39fa2-signing-cabundle\") pod \"service-ca-9c57cc56f-8mx4s\" (UID: \"3e0d8d02-d114-4cc4-9a04-823669e39fa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.231523 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-mountpoint-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.232704 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a8ae231-47bc-49ee-8413-de5a08c05d08-config-volume\") pod \"dns-default-z28b7\" (UID: \"4a8ae231-47bc-49ee-8413-de5a08c05d08\") " pod="openshift-dns/dns-default-z28b7" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.233615 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9acb5d27-9286-4d3d-9e36-237482223717-serving-cert\") pod \"service-ca-operator-777779d784-7hq88\" (UID: \"9acb5d27-9286-4d3d-9e36-237482223717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" Jan 23 11:54:53 crc kubenswrapper[4865]: 
I0123 11:54:53.234135 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e265134c-b7db-4575-a84b-bc2c6806fffb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-w8js8\" (UID: \"e265134c-b7db-4575-a84b-bc2c6806fffb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" Jan 23 11:54:53 crc kubenswrapper[4865]: E0123 11:54:53.234931 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:53.734909965 +0000 UTC m=+137.903982431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.235524 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/422b74fd-82b6-4fe3-b9e6-fc044ec8436f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2crw8\" (UID: \"422b74fd-82b6-4fe3-b9e6-fc044ec8436f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.239652 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e265134c-b7db-4575-a84b-bc2c6806fffb-config\") pod \"kube-controller-manager-operator-78b949d7b-w8js8\" (UID: \"e265134c-b7db-4575-a84b-bc2c6806fffb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.240898 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-config-volume\") pod \"collect-profiles-29486145-rzp9s\" (UID: \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.241011 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-registration-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.245132 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2699af1d-57a0-4ce2-9550-b423f9eafc0f-tmpfs\") pod \"packageserver-d55dfcdfc-xwjxp\" (UID: \"2699af1d-57a0-4ce2-9550-b423f9eafc0f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.245892 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62eff2ee-82ac-4672-9109-3a72c02f32e6-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-rdt9c\" (UID: \"62eff2ee-82ac-4672-9109-3a72c02f32e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.247104 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62eff2ee-82ac-4672-9109-3a72c02f32e6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdt9c\" (UID: \"62eff2ee-82ac-4672-9109-3a72c02f32e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.247413 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/422b74fd-82b6-4fe3-b9e6-fc044ec8436f-images\") pod \"machine-config-operator-74547568cd-2crw8\" (UID: \"422b74fd-82b6-4fe3-b9e6-fc044ec8436f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.247434 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/41b1eece-7199-4214-9add-15fd7c3039c7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-25dzs\" (UID: \"41b1eece-7199-4214-9add-15fd7c3039c7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.247763 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/582f83b4-97dc-4f56-9879-c73fab80488a-profile-collector-cert\") pod \"olm-operator-6b444d44fb-g5xkl\" (UID: \"582f83b4-97dc-4f56-9879-c73fab80488a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.248101 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-csi-data-dir\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.248813 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9acb5d27-9286-4d3d-9e36-237482223717-config\") pod \"service-ca-operator-777779d784-7hq88\" (UID: \"9acb5d27-9286-4d3d-9e36-237482223717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.250266 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3d080f1-896a-4b64-8ff3-05db0fd12be3-trusted-ca\") pod \"ingress-operator-5b745b69d9-8xqt5\" (UID: \"d3d080f1-896a-4b64-8ff3-05db0fd12be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.263922 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4a8ae231-47bc-49ee-8413-de5a08c05d08-metrics-tls\") pod \"dns-default-z28b7\" (UID: \"4a8ae231-47bc-49ee-8413-de5a08c05d08\") " pod="openshift-dns/dns-default-z28b7" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 
11:54:53.264580 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ff3d137-2611-43f9-9825-4839a271fc69-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-zbtbp\" (UID: \"4ff3d137-2611-43f9-9825-4839a271fc69\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.265308 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e1f4986a-71c2-4cba-a049-8f1ea07cfd17-cert\") pod \"ingress-canary-bx4k5\" (UID: \"e1f4986a-71c2-4cba-a049-8f1ea07cfd17\") " pod="openshift-ingress-canary/ingress-canary-bx4k5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.265414 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d3d080f1-896a-4b64-8ff3-05db0fd12be3-metrics-tls\") pod \"ingress-operator-5b745b69d9-8xqt5\" (UID: \"d3d080f1-896a-4b64-8ff3-05db0fd12be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.265860 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2699af1d-57a0-4ce2-9550-b423f9eafc0f-apiservice-cert\") pod \"packageserver-d55dfcdfc-xwjxp\" (UID: \"2699af1d-57a0-4ce2-9550-b423f9eafc0f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.265953 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/401d6c1a-be67-4fb7-97f6-d46e3ba35960-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mwzzv\" (UID: \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\") " pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.266165 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/752d7a7c-100b-4c07-a601-96309d9e4a33-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-78x6m\" (UID: \"752d7a7c-100b-4c07-a601-96309d9e4a33\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-78x6m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.266698 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bbdd3e92-1864-4a2b-9284-720a4813247a-node-bootstrap-token\") pod \"machine-config-server-cshcx\" (UID: \"bbdd3e92-1864-4a2b-9284-720a4813247a\") " pod="openshift-machine-config-operator/machine-config-server-cshcx" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.266707 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c1ba660-8691-49e2-b0cc-056355d82f4c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4g249\" (UID: \"2c1ba660-8691-49e2-b0cc-056355d82f4c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.267097 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/582f83b4-97dc-4f56-9879-c73fab80488a-srv-cert\") pod 
\"olm-operator-6b444d44fb-g5xkl\" (UID: \"582f83b4-97dc-4f56-9879-c73fab80488a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.267287 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr2p5\" (UniqueName: \"kubernetes.io/projected/3670e4c8-6d81-4ace-ad05-7c045097b991-kube-api-access-dr2p5\") pod \"etcd-operator-b45778765-gc69w\" (UID: \"3670e4c8-6d81-4ace-ad05-7c045097b991\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.267777 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/843c383b-053f-42f5-88ce-7a216f5354a3-profile-collector-cert\") pod \"catalog-operator-68c6474976-42cdm\" (UID: \"843c383b-053f-42f5-88ce-7a216f5354a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.269502 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-secret-volume\") pod \"collect-profiles-29486145-rzp9s\" (UID: \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.273571 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/401d6c1a-be67-4fb7-97f6-d46e3ba35960-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mwzzv\" (UID: \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\") " pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.276271 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxs7x\" (UniqueName: \"kubernetes.io/projected/d374fa65-2538-47f8-abc5-5f7eac853d58-kube-api-access-fxs7x\") pod \"openshift-apiserver-operator-796bbdcf4f-xhktr\" (UID: \"d374fa65-2538-47f8-abc5-5f7eac853d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.276708 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bbdd3e92-1864-4a2b-9284-720a4813247a-certs\") pod \"machine-config-server-cshcx\" (UID: \"bbdd3e92-1864-4a2b-9284-720a4813247a\") " pod="openshift-machine-config-operator/machine-config-server-cshcx" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.282116 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/843c383b-053f-42f5-88ce-7a216f5354a3-srv-cert\") pod \"catalog-operator-68c6474976-42cdm\" (UID: \"843c383b-053f-42f5-88ce-7a216f5354a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.294266 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e79e3141-542e-448b-a2fd-2ac6fc6ef33b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zv7b2\" (UID: \"e79e3141-542e-448b-a2fd-2ac6fc6ef33b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 
11:54:53.306222 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k665n\" (UniqueName: \"kubernetes.io/projected/34e16446-9445-4646-bf3b-08764f77f949-kube-api-access-k665n\") pod \"console-f9d7485db-bpdjt\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.323737 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-bound-sa-token\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.325708 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:53 crc kubenswrapper[4865]: E0123 11:54:53.326332 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:53.826317173 +0000 UTC m=+137.995389399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.347189 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcbpw\" (UniqueName: \"kubernetes.io/projected/abfb95e1-29f2-4c22-8ec7-6683cf251601-kube-api-access-zcbpw\") pod \"openshift-controller-manager-operator-756b6f6bc6-v2sh5\" (UID: \"abfb95e1-29f2-4c22-8ec7-6683cf251601\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.364592 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.372762 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b7e3b68e-b5c0-4446-9d59-39be6a478326-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vfc5n\" (UID: \"b7e3b68e-b5c0-4446-9d59-39be6a478326\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.379855 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.388966 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-48b72" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.407306 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xnfn\" (UniqueName: \"kubernetes.io/projected/67fa6cdb-c380-4b05-a05d-9df4a4b86019-kube-api-access-7xnfn\") pod \"oauth-openshift-558db77b4-gk4fh\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.416043 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.420069 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr5kq\" (UniqueName: \"kubernetes.io/projected/9a3c42c2-81aa-404b-ad80-7d534f6a6007-kube-api-access-hr5kq\") pod \"dns-operator-744455d44c-gzd5x\" (UID: \"9a3c42c2-81aa-404b-ad80-7d534f6a6007\") " pod="openshift-dns-operator/dns-operator-744455d44c-gzd5x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.425879 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.431375 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: E0123 11:54:53.431727 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:53.931714864 +0000 UTC m=+138.100787090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.434488 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.445787 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.451171 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7zdb\" (UniqueName: \"kubernetes.io/projected/adc78811-9b09-4d82-bba2-2a63f0c52f7b-kube-api-access-t7zdb\") pod \"machine-config-controller-84d6567774-n4z5m\" (UID: \"adc78811-9b09-4d82-bba2-2a63f0c52f7b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.453659 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.461338 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.492350 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nkt6\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-kube-api-access-4nkt6\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.529215 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4jvq\" (UniqueName: \"kubernetes.io/projected/f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb-kube-api-access-k4jvq\") pod \"csi-hostpathplugin-g7l9x\" (UID: \"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb\") " pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.541052 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:53 crc kubenswrapper[4865]: E0123 11:54:53.541530 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:54.04151505 +0000 UTC m=+138.210587276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.552322 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcgs9\" (UniqueName: \"kubernetes.io/projected/4a8ae231-47bc-49ee-8413-de5a08c05d08-kube-api-access-zcgs9\") pod \"dns-default-z28b7\" (UID: \"4a8ae231-47bc-49ee-8413-de5a08c05d08\") " pod="openshift-dns/dns-default-z28b7" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.594205 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62eff2ee-82ac-4672-9109-3a72c02f32e6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rdt9c\" (UID: \"62eff2ee-82ac-4672-9109-3a72c02f32e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.597054 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsb9q\" (UniqueName: \"kubernetes.io/projected/d3d080f1-896a-4b64-8ff3-05db0fd12be3-kube-api-access-tsb9q\") pod \"ingress-operator-5b745b69d9-8xqt5\" (UID: \"d3d080f1-896a-4b64-8ff3-05db0fd12be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.598295 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nb9c4\" (UniqueName: \"kubernetes.io/projected/3fbcdfcf-19cc-46b9-a986-bd9426751459-kube-api-access-nb9c4\") pod \"router-default-5444994796-swk7h\" (UID: \"3fbcdfcf-19cc-46b9-a986-bd9426751459\") " pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.620303 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsqld\" (UniqueName: \"kubernetes.io/projected/9acb5d27-9286-4d3d-9e36-237482223717-kube-api-access-xsqld\") pod \"service-ca-operator-777779d784-7hq88\" (UID: \"9acb5d27-9286-4d3d-9e36-237482223717\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.625089 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zt6t\" (UniqueName: \"kubernetes.io/projected/422b74fd-82b6-4fe3-b9e6-fc044ec8436f-kube-api-access-6zt6t\") pod \"machine-config-operator-74547568cd-2crw8\" (UID: \"422b74fd-82b6-4fe3-b9e6-fc044ec8436f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.628404 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-729dk\" (UniqueName: \"kubernetes.io/projected/41b1eece-7199-4214-9add-15fd7c3039c7-kube-api-access-729dk\") pod \"control-plane-machine-set-operator-78cbb6b69f-25dzs\" (UID: \"41b1eece-7199-4214-9add-15fd7c3039c7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.630155 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.641242 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpgs4\" (UniqueName: \"kubernetes.io/projected/2699af1d-57a0-4ce2-9550-b423f9eafc0f-kube-api-access-vpgs4\") pod \"packageserver-d55dfcdfc-xwjxp\" (UID: \"2699af1d-57a0-4ce2-9550-b423f9eafc0f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.647265 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: E0123 11:54:53.647630 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:54.147618289 +0000 UTC m=+138.316690515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.657716 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-z28b7" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.664153 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjqdh\" (UniqueName: \"kubernetes.io/projected/3e0d8d02-d114-4cc4-9a04-823669e39fa2-kube-api-access-zjqdh\") pod \"service-ca-9c57cc56f-8mx4s\" (UID: \"3e0d8d02-d114-4cc4-9a04-823669e39fa2\") " pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.675932 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-gzd5x" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.676666 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzqq2\" (UniqueName: \"kubernetes.io/projected/582f83b4-97dc-4f56-9879-c73fab80488a-kube-api-access-tzqq2\") pod \"olm-operator-6b444d44fb-g5xkl\" (UID: \"582f83b4-97dc-4f56-9879-c73fab80488a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.696211 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.701157 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds5k7\" (UniqueName: \"kubernetes.io/projected/4ff3d137-2611-43f9-9825-4839a271fc69-kube-api-access-ds5k7\") pod \"kube-storage-version-migrator-operator-b67b599dd-zbtbp\" (UID: \"4ff3d137-2611-43f9-9825-4839a271fc69\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.728641 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e265134c-b7db-4575-a84b-bc2c6806fffb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-w8js8\" (UID: \"e265134c-b7db-4575-a84b-bc2c6806fffb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.749244 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99ztv\" (UniqueName: \"kubernetes.io/projected/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-kube-api-access-99ztv\") pod \"collect-profiles-29486145-rzp9s\" (UID: \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.749730 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:53 crc kubenswrapper[4865]: E0123 11:54:53.750009 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:54.249997518 +0000 UTC m=+138.419069744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.759956 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgnt8\" (UniqueName: \"kubernetes.io/projected/e1f4986a-71c2-4cba-a049-8f1ea07cfd17-kube-api-access-sgnt8\") pod \"ingress-canary-bx4k5\" (UID: \"e1f4986a-71c2-4cba-a049-8f1ea07cfd17\") " pod="openshift-ingress-canary/ingress-canary-bx4k5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.770529 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.772556 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3d080f1-896a-4b64-8ff3-05db0fd12be3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8xqt5\" (UID: \"d3d080f1-896a-4b64-8ff3-05db0fd12be3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.779584 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.790686 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cflks\" (UniqueName: \"kubernetes.io/projected/843c383b-053f-42f5-88ce-7a216f5354a3-kube-api-access-cflks\") pod \"catalog-operator-68c6474976-42cdm\" (UID: \"843c383b-053f-42f5-88ce-7a216f5354a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.790984 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.795267 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.804077 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.813435 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.822794 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkf4m\" (UniqueName: \"kubernetes.io/projected/bbdd3e92-1864-4a2b-9284-720a4813247a-kube-api-access-vkf4m\") pod \"machine-config-server-cshcx\" (UID: \"bbdd3e92-1864-4a2b-9284-720a4813247a\") " pod="openshift-machine-config-operator/machine-config-server-cshcx" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.837217 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.841222 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj5tk\" (UniqueName: \"kubernetes.io/projected/752d7a7c-100b-4c07-a601-96309d9e4a33-kube-api-access-bj5tk\") pod \"multus-admission-controller-857f4d67dd-78x6m\" (UID: \"752d7a7c-100b-4c07-a601-96309d9e4a33\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-78x6m" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.851324 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:53 crc kubenswrapper[4865]: E0123 11:54:53.851697 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:54.35168488 +0000 UTC m=+138.520757106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.854186 4865 csr.go:261] certificate signing request csr-2w84z is approved, waiting to be issued Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.854357 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.881963 4865 csr.go:257] certificate signing request csr-2w84z is issued Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.882355 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7t2x\" (UniqueName: \"kubernetes.io/projected/8b1224b1-f7b9-48de-9842-9c0c91f4d96a-kube-api-access-g7t2x\") pod \"migrator-59844c95c7-tb57w\" (UID: \"8b1224b1-f7b9-48de-9842-9c0c91f4d96a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tb57w" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.882743 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.882914 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.888962 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn69j\" (UniqueName: \"kubernetes.io/projected/401d6c1a-be67-4fb7-97f6-d46e3ba35960-kube-api-access-zn69j\") pod \"marketplace-operator-79b997595-mwzzv\" (UID: \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\") " pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.892182 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.898826 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.914813 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.925858 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v684\" (UniqueName: \"kubernetes.io/projected/2c1ba660-8691-49e2-b0cc-056355d82f4c-kube-api-access-2v684\") pod \"package-server-manager-789f6589d5-4g249\" (UID: \"2c1ba660-8691-49e2-b0cc-056355d82f4c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.940860 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cshcx" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.950860 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bx4k5" Jan 23 11:54:53 crc kubenswrapper[4865]: I0123 11:54:53.952644 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:53 crc kubenswrapper[4865]: E0123 11:54:53.953034 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:54.453012974 +0000 UTC m=+138.622085200 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.019188 4865 generic.go:334] "Generic (PLEG): container finished" podID="51f498e1-f13f-4977-a3e3-ea8bc6b75c6f" containerID="41d9c2bdb881468bfefd13a9e6b823e2c969681c21fb60f67c7e7b6c99deb5f8" exitCode=0 Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.019278 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" event={"ID":"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f","Type":"ContainerDied","Data":"41d9c2bdb881468bfefd13a9e6b823e2c969681c21fb60f67c7e7b6c99deb5f8"} Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.039428 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5"] Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.053705 4865 generic.go:334] "Generic (PLEG): container finished" podID="0cab2dc0-42b2-4029-8388-b20c287698bc" containerID="4e10292f434175aa91edd2904b1fab5da1e6d351a467ab3059676b87b8bd0583" exitCode=0 Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.053800 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" event={"ID":"0cab2dc0-42b2-4029-8388-b20c287698bc","Type":"ContainerDied","Data":"4e10292f434175aa91edd2904b1fab5da1e6d351a467ab3059676b87b8bd0583"} Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.058194 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:54 crc kubenswrapper[4865]: E0123 11:54:54.058583 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:54.55857022 +0000 UTC m=+138.727642446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.079968 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" event={"ID":"2a66fefc-bc9d-4922-821a-63e84b87e740","Type":"ContainerStarted","Data":"31d1d25c82139b728354bc2d5b355f808b8ef4240e8f8fd2d09f7c869c64ab63"} Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.098285 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" event={"ID":"141f6171-3d39-421b-98f4-6accc5d30ae2","Type":"ContainerStarted","Data":"eeb461b6cb630a97f0fbc5e12f059a8993d241deb81f696209187f4282c21944"} Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.098320 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.106574 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.122856 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-78x6m" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.127537 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tb57w" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.146846 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.159339 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:54 crc kubenswrapper[4865]: E0123 11:54:54.162785 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:54.662751192 +0000 UTC m=+138.831823408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.165348 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.165871 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:54 crc kubenswrapper[4865]: E0123 11:54:54.170657 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:54.670640091 +0000 UTC m=+138.839712317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.206864 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.207192 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.266729 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:54 crc kubenswrapper[4865]: E0123 11:54:54.267143 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:54.767126539 +0000 UTC m=+138.936198765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.373291 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:54 crc kubenswrapper[4865]: E0123 11:54:54.374248 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:54.874234711 +0000 UTC m=+139.043306937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.438965 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn"] Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.474894 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:54 crc kubenswrapper[4865]: E0123 11:54:54.475725 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:54.975710209 +0000 UTC m=+139.144782435 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.526840 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hxj4k" podStartSLOduration=118.526824181 podStartE2EDuration="1m58.526824181s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:54.487201043 +0000 UTC m=+138.656273269" watchObservedRunningTime="2026-01-23 11:54:54.526824181 +0000 UTC m=+138.695896407" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.570344 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-fmgw6" podStartSLOduration=120.570328121 podStartE2EDuration="2m0.570328121s" podCreationTimestamp="2026-01-23 11:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:54.525329635 +0000 UTC m=+138.694401861" watchObservedRunningTime="2026-01-23 11:54:54.570328121 +0000 UTC m=+138.739400347" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.576850 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:54 crc kubenswrapper[4865]: E0123 11:54:54.577110 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:55.077098444 +0000 UTC m=+139.246170680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.677463 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:54 crc kubenswrapper[4865]: E0123 11:54:54.677941 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:55.177925926 +0000 UTC m=+139.346998152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.698110 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" podStartSLOduration=119.698094558 podStartE2EDuration="1m59.698094558s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:54.692786541 +0000 UTC m=+138.861858777" watchObservedRunningTime="2026-01-23 11:54:54.698094558 +0000 UTC m=+138.867166784" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.779061 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:54 crc kubenswrapper[4865]: E0123 11:54:54.779452 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:55.279434543 +0000 UTC m=+139.448506769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.798131 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-48b72"] Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.881459 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:54 crc kubenswrapper[4865]: E0123 11:54:54.881637 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:55.381593037 +0000 UTC m=+139.550665263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.882074 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:54 crc kubenswrapper[4865]: E0123 11:54:54.882337 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:55.382325565 +0000 UTC m=+139.551397791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.892010 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 11:49:53 +0000 UTC, rotation deadline is 2026-12-14 06:21:43.761821866 +0000 UTC Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.892046 4865 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7794h26m48.869778698s for next certificate rotation Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.955061 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podStartSLOduration=119.955038595 podStartE2EDuration="1m59.955038595s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:54.949047972 +0000 UTC m=+139.118120198" watchObservedRunningTime="2026-01-23 11:54:54.955038595 +0000 UTC m=+139.124110821" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.956034 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" podStartSLOduration=119.956029759 podStartE2EDuration="1m59.956029759s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:54.89213373 +0000 UTC m=+139.061205956" watchObservedRunningTime="2026-01-23 11:54:54.956029759 +0000 UTC m=+139.125101985" Jan 23 11:54:54 crc kubenswrapper[4865]: I0123 11:54:54.987930 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:54 crc kubenswrapper[4865]: E0123 11:54:54.988730 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:55.48871308 +0000 UTC m=+139.657785306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:54.997747 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bpdjt"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.044928 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podStartSLOduration=120.044906744 podStartE2EDuration="2m0.044906744s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:55.021259689 +0000 UTC m=+139.190331905" watchObservedRunningTime="2026-01-23 11:54:55.044906744 +0000 UTC m=+139.213978970" Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.068208 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-g7l9x"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.071228 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-z28b7"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.092564 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:55 crc kubenswrapper[4865]: E0123 11:54:55.092896 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:55.592880492 +0000 UTC m=+139.761952718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.138892 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn" event={"ID":"03a35268-fb83-4d1b-8880-ed275cc23052","Type":"ContainerStarted","Data":"d3695d0b85d7fb2e38550547c4cda56f94034e3b280dafa27e22f035b1afb82a"} Jan 23 11:54:55 crc kubenswrapper[4865]: W0123 11:54:55.139000 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e16446_9445_4646_bf3b_08764f77f949.slice/crio-1e4475de300ed1208deefe344a3dbd534eb2cf9438ea074ada0a64a5c3a33026 WatchSource:0}: Error finding container 1e4475de300ed1208deefe344a3dbd534eb2cf9438ea074ada0a64a5c3a33026: Status 404 returned error can't find the container with id 1e4475de300ed1208deefe344a3dbd534eb2cf9438ea074ada0a64a5c3a33026 Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.152693 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cshcx" event={"ID":"bbdd3e92-1864-4a2b-9284-720a4813247a","Type":"ContainerStarted","Data":"cd18d98c76f7421ba6735959d0cad4fba3368321c87c57c46ebb6bbe54012e36"} Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.182817 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-swk7h" event={"ID":"3fbcdfcf-19cc-46b9-a986-bd9426751459","Type":"ContainerStarted","Data":"85d851fcbf1f53a27235994739e2b370d1223d705304ce6cbc755eba40390921"} Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.203856 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:55 crc kubenswrapper[4865]: E0123 11:54:55.204221 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:55.704206385 +0000 UTC m=+139.873278601 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.219157 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-48b72" event={"ID":"bdee5ba9-99e1-495c-9b52-f670cbbffea2","Type":"ContainerStarted","Data":"812f9016761fee1beddea61ed97f186f1edd18be14c291e5706e5a701b765324"} Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.232077 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" event={"ID":"abfb95e1-29f2-4c22-8ec7-6683cf251601","Type":"ContainerStarted","Data":"0e1362f23e6a163d8873478dd9b6564842b40886e7870cc450e7a217cce48c44"} Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.305247 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:55 crc kubenswrapper[4865]: E0123 11:54:55.306670 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:55.806658456 +0000 UTC m=+139.975730682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.322746 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gk4fh"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.332369 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" podStartSLOduration=119.33235038 podStartE2EDuration="1m59.33235038s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:55.328804616 +0000 UTC m=+139.497876842" watchObservedRunningTime="2026-01-23 11:54:55.33235038 +0000 UTC m=+139.501422606" Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.334619 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-gzd5x"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.349873 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.382641 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.406494 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.413059 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:55 crc kubenswrapper[4865]: E0123 11:54:55.413460 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:55.91343076 +0000 UTC m=+140.082502986 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.465677 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gc69w"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.517271 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:55 crc kubenswrapper[4865]: E0123 11:54:55.519531 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:56.019517428 +0000 UTC m=+140.188589654 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:55 crc kubenswrapper[4865]: W0123 11:54:55.528566 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadc78811_9b09_4d82_bba2_2a63f0c52f7b.slice/crio-0e11f9ee9e57198b547dc3655e1153468838db910cb4f4545623b7e463ab40ce WatchSource:0}: Error finding container 0e11f9ee9e57198b547dc3655e1153468838db910cb4f4545623b7e463ab40ce: Status 404 returned error can't find the container with id 0e11f9ee9e57198b547dc3655e1153468838db910cb4f4545623b7e463ab40ce Jan 23 11:54:55 crc kubenswrapper[4865]: W0123 11:54:55.578997 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode79e3141_542e_448b_a2fd_2ac6fc6ef33b.slice/crio-3012a7cc24a113d4f8dd096ab8ad7da6ae4dac1c6f6e443d6ea64db1a7928825 WatchSource:0}: Error finding container 3012a7cc24a113d4f8dd096ab8ad7da6ae4dac1c6f6e443d6ea64db1a7928825: Status 404 returned error can't find the container with id 3012a7cc24a113d4f8dd096ab8ad7da6ae4dac1c6f6e443d6ea64db1a7928825 Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.620008 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:55 crc kubenswrapper[4865]: E0123 11:54:55.620680 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:56.120664458 +0000 UTC m=+140.289736684 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.706517 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.723635 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:55 crc kubenswrapper[4865]: E0123 11:54:55.723938 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:56.223924088 +0000 UTC m=+140.392996314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.724890 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.734373 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bx4k5"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.803995 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm"] Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.828667 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:55 crc kubenswrapper[4865]: E0123 11:54:55.828958 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:56.32894378 +0000 UTC m=+140.498016006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.931175 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:55 crc kubenswrapper[4865]: E0123 11:54:55.931452 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:56.431440082 +0000 UTC m=+140.600512308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:55 crc kubenswrapper[4865]: I0123 11:54:55.991573 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8"] Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.014817 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-8mx4s"] Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.026703 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mwzzv"] Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.033088 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:56 crc kubenswrapper[4865]: E0123 11:54:56.033831 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:56.533813871 +0000 UTC m=+140.702886097 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.041166 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl"] Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.107189 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5"] Jan 23 11:54:56 crc kubenswrapper[4865]: W0123 11:54:56.132220 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod843c383b_053f_42f5_88ce_7a216f5354a3.slice/crio-113d26d3cbd75c852802478115044a4ebed7185c8ee3a6b2eaada651248e8442 WatchSource:0}: Error finding container 113d26d3cbd75c852802478115044a4ebed7185c8ee3a6b2eaada651248e8442: Status 404 returned error can't find the container with id 113d26d3cbd75c852802478115044a4ebed7185c8ee3a6b2eaada651248e8442 Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.155886 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:56 crc kubenswrapper[4865]: E0123 11:54:56.156427 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:56.656405734 +0000 UTC m=+140.825477960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.220111 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s"] Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.256592 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:56 crc kubenswrapper[4865]: E0123 11:54:56.256895 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 11:54:56.756881078 +0000 UTC m=+140.925953304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.300858 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-tb57w"] Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.329587 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" event={"ID":"843c383b-053f-42f5-88ce-7a216f5354a3","Type":"ContainerStarted","Data":"113d26d3cbd75c852802478115044a4ebed7185c8ee3a6b2eaada651248e8442"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.354822 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp"] Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.363483 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:56 crc kubenswrapper[4865]: E0123 11:54:56.363778 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:56.863765684 +0000 UTC m=+141.032837910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.372024 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerStarted","Data":"e97c8628a5a41834d693440e3587544538d58e8ac0f48b3f3c8447e0f70cc92f"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.386641 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs"] Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.398968 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8"] Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.422941 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c"] Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.428373 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" event={"ID":"0cab2dc0-42b2-4029-8388-b20c287698bc","Type":"ContainerStarted","Data":"495fa0b44e7bcc635fb49fc0928d29d65254323d4a22f5b88a06b312c0cd8fbe"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.443641 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gzd5x" event={"ID":"9a3c42c2-81aa-404b-ad80-7d534f6a6007","Type":"ContainerStarted","Data":"b723945215fc80d1d3041ef88e12353bbbfb8e86b5afb9b9fa1e846f415ad248"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.457326 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7hq88"] Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.464055 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.464339 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249"] Jan 23 11:54:56 crc kubenswrapper[4865]: E0123 11:54:56.464421 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:56.964406122 +0000 UTC m=+141.133478348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.502632 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" event={"ID":"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f","Type":"ContainerStarted","Data":"33310dbdbff25f4a25a33ac251e7c61945b497e6e6ce94aa64b47f37d8895a56"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.527492 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" event={"ID":"3670e4c8-6d81-4ace-ad05-7c045097b991","Type":"ContainerStarted","Data":"f9a912551e356ac8ce7de5a52d42c21b8bade117e6ceee8c7c32c02317589e4b"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.534975 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-78x6m"] Jan 23 11:54:56 crc kubenswrapper[4865]: W0123 11:54:56.539065 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod422b74fd_82b6_4fe3_b9e6_fc044ec8436f.slice/crio-aa1e78ab1fc6a4ef728981232c089073688e054dfc67066138bd161e198b46ff WatchSource:0}: Error finding container aa1e78ab1fc6a4ef728981232c089073688e054dfc67066138bd161e198b46ff: Status 404 returned error can't find the container with id aa1e78ab1fc6a4ef728981232c089073688e054dfc67066138bd161e198b46ff Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.554063 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" event={"ID":"e79e3141-542e-448b-a2fd-2ac6fc6ef33b","Type":"ContainerStarted","Data":"3012a7cc24a113d4f8dd096ab8ad7da6ae4dac1c6f6e443d6ea64db1a7928825"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.565119 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:56 crc kubenswrapper[4865]: E0123 11:54:56.570629 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:57.070581692 +0000 UTC m=+141.239653998 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:56 crc kubenswrapper[4865]: W0123 11:54:56.577658 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9acb5d27_9286_4d3d_9e36_237482223717.slice/crio-6c0f2ea7e9b4d21e5934331d0ab924d283dcf57862fb822c71d29ad4d6f00be1 WatchSource:0}: Error finding container 6c0f2ea7e9b4d21e5934331d0ab924d283dcf57862fb822c71d29ad4d6f00be1: Status 404 returned error can't find the container with id 6c0f2ea7e9b4d21e5934331d0ab924d283dcf57862fb822c71d29ad4d6f00be1 Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.601349 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bx4k5" event={"ID":"e1f4986a-71c2-4cba-a049-8f1ea07cfd17","Type":"ContainerStarted","Data":"e7dd0ca2b5f76c8487390b70e3eb039053f147ba4009467564ebea6400f4e18b"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.626164 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" event={"ID":"2699af1d-57a0-4ce2-9550-b423f9eafc0f","Type":"ContainerStarted","Data":"baa1785f683855c9740c6be7d4ace12e142988e7e1c36b8a818a442e39de4dda"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.667347 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:56 crc kubenswrapper[4865]: E0123 11:54:56.667968 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:57.167951521 +0000 UTC m=+141.337023737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.696199 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cshcx" event={"ID":"bbdd3e92-1864-4a2b-9284-720a4813247a","Type":"ContainerStarted","Data":"0da7b8ce6cdfde98e72bebe16ebf2c8ddd1974926d69515ca140aef52a09ab4f"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.744447 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" event={"ID":"d374fa65-2538-47f8-abc5-5f7eac853d58","Type":"ContainerStarted","Data":"8682485d00ab3cbb7b77e626973494bf3e6d72bd1ae361b5320978e062a422d9"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.767010 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.767577 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.768662 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bpdjt" event={"ID":"34e16446-9445-4646-bf3b-08764f77f949","Type":"ContainerStarted","Data":"c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.768699 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bpdjt" event={"ID":"34e16446-9445-4646-bf3b-08764f77f949","Type":"ContainerStarted","Data":"1e4475de300ed1208deefe344a3dbd534eb2cf9438ea074ada0a64a5c3a33026"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.770053 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:56 crc kubenswrapper[4865]: E0123 11:54:56.771060 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:57.271043577 +0000 UTC m=+141.440115803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.781375 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn" event={"ID":"03a35268-fb83-4d1b-8880-ed275cc23052","Type":"ContainerStarted","Data":"93de93c3647b3e254e94fbae16b7c06d687a19854396fc1f51d74d35590a44d5"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.810458 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" event={"ID":"adc78811-9b09-4d82-bba2-2a63f0c52f7b","Type":"ContainerStarted","Data":"0e11f9ee9e57198b547dc3655e1153468838db910cb4f4545623b7e463ab40ce"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.828683 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-48b72" event={"ID":"bdee5ba9-99e1-495c-9b52-f670cbbffea2","Type":"ContainerStarted","Data":"1420969c42f470d4a513235aeea7c05ddcfab5fb6d197c86b4cfc87c977c6dc8"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.829539 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-48b72" Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.843087 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.843176 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.872248 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:56 crc kubenswrapper[4865]: E0123 11:54:56.873823 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:57.373800175 +0000 UTC m=+141.542872401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.942387 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z28b7" event={"ID":"4a8ae231-47bc-49ee-8413-de5a08c05d08","Type":"ContainerStarted","Data":"ec3c0d3f51ff05be1613c65a3e704094262c201f29bd35eb0623349ad73ce6b7"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.959465 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-swk7h" event={"ID":"3fbcdfcf-19cc-46b9-a986-bd9426751459","Type":"ContainerStarted","Data":"811113bc6d4b48797e3257d6ecb031d47d0fdf3124b5f86b193d7c7a21255914"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.974122 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:56 crc kubenswrapper[4865]: E0123 11:54:56.974747 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:57.47473493 +0000 UTC m=+141.643807156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.975108 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" event={"ID":"67fa6cdb-c380-4b05-a05d-9df4a4b86019","Type":"ContainerStarted","Data":"14cb46d8e83ead72545c309bf2015bc390250d8b319ee04e8ea75d6df879f032"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.975844 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.986154 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" event={"ID":"b7e3b68e-b5c0-4446-9d59-39be6a478326","Type":"ContainerStarted","Data":"8642dcea836337bf04cb7f79d7b0806f0629bb444402a6485aed471dbf9ca5b8"} Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.988706 4865 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-gk4fh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body= Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.988739 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" podUID="67fa6cdb-c380-4b05-a05d-9df4a4b86019" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" Jan 23 11:54:56 crc kubenswrapper[4865]: I0123 11:54:56.991087 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" event={"ID":"abfb95e1-29f2-4c22-8ec7-6683cf251601","Type":"ContainerStarted","Data":"dc6ad9567803715ec5227d275b06e0217d862383aa7b4862ca21d39a236ea83e"} Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.074802 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:57 crc kubenswrapper[4865]: E0123 11:54:57.076755 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:57.57673703 +0000 UTC m=+141.745809256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.172535 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" podStartSLOduration=121.172515762 podStartE2EDuration="2m1.172515762s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:57.12101981 +0000 UTC m=+141.290092036" watchObservedRunningTime="2026-01-23 11:54:57.172515762 +0000 UTC m=+141.341587988" Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.173872 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-48b72" podStartSLOduration=122.173865054 podStartE2EDuration="2m2.173865054s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:57.171677181 +0000 UTC m=+141.340749407" watchObservedRunningTime="2026-01-23 11:54:57.173865054 +0000 UTC m=+141.342937280" Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.177512 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:57 crc kubenswrapper[4865]: E0123 11:54:57.178199 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:57.678185487 +0000 UTC m=+141.847257713 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.256452 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-swk7h" podStartSLOduration=122.256430319 podStartE2EDuration="2m2.256430319s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:57.250452485 +0000 UTC m=+141.419524711" watchObservedRunningTime="2026-01-23 11:54:57.256430319 +0000 UTC m=+141.425502545" Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.279343 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:57 crc kubenswrapper[4865]: E0123 11:54:57.280081 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:57.780064204 +0000 UTC m=+141.949136420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.372507 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" podStartSLOduration=122.372482295 podStartE2EDuration="2m2.372482295s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:57.37017679 +0000 UTC m=+141.539249016" watchObservedRunningTime="2026-01-23 11:54:57.372482295 +0000 UTC m=+141.541554521" Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.373367 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-v2sh5" podStartSLOduration=122.373360936 podStartE2EDuration="2m2.373360936s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:57.305778779 +0000 UTC m=+141.474851025" watchObservedRunningTime="2026-01-23 11:54:57.373360936 +0000 UTC m=+141.542433162" Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.383735 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:57 crc kubenswrapper[4865]: E0123 11:54:57.384246 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:57.884223956 +0000 UTC m=+142.053296182 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.431930 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cshcx" podStartSLOduration=7.4319061269999995 podStartE2EDuration="7.431906127s" podCreationTimestamp="2026-01-23 11:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:57.43121544 +0000 UTC m=+141.600287666" watchObservedRunningTime="2026-01-23 11:54:57.431906127 +0000 UTC m=+141.600978353" Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.468820 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-bpdjt" podStartSLOduration=122.468803529 podStartE2EDuration="2m2.468803529s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:57.466104074 +0000 UTC m=+141.635176300" watchObservedRunningTime="2026-01-23 11:54:57.468803529 +0000 UTC m=+141.637875755" Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.484405 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:57 crc kubenswrapper[4865]: E0123 11:54:57.484968 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:57.984945216 +0000 UTC m=+142.154017442 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.568940 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.594100 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:57 crc kubenswrapper[4865]: E0123 11:54:57.594502 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:58.094486136 +0000 UTC m=+142.263558362 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.695595 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:57 crc kubenswrapper[4865]: E0123 11:54:57.696113 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:58.196072406 +0000 UTC m=+142.365144642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.777503 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.797936 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:54:57 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:54:57 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:54:57 crc kubenswrapper[4865]: healthz check failed Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.797990 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.799095 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:57 crc kubenswrapper[4865]: E0123 11:54:57.799502 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:58.299477879 +0000 UTC m=+142.468550285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.887630 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:57 crc kubenswrapper[4865]: I0123 11:54:57.901819 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:57 crc kubenswrapper[4865]: E0123 11:54:57.902210 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:58.402195797 +0000 UTC m=+142.571268023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.003174 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:58 crc kubenswrapper[4865]: E0123 11:54:58.003492 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:58.50348153 +0000 UTC m=+142.672553756 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.013707 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" event={"ID":"2699af1d-57a0-4ce2-9550-b423f9eafc0f","Type":"ContainerStarted","Data":"295f055b70b23f536bbb0d34672057382274d320b9f85b221c28b54f85445626"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.014861 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.018805 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.018855 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.019775 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerStarted","Data":"a61639e591677ff6558a3b162b8b0f56384d87f66c1768087f94fff3e4308e0f"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.025979 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-78x6m" event={"ID":"752d7a7c-100b-4c07-a601-96309d9e4a33","Type":"ContainerStarted","Data":"cb9c414e05a93d42eaa173f0ca4bc148fdc49b3c4ed05b2e7570a10deddf43f5"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.027096 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" event={"ID":"d3d080f1-896a-4b64-8ff3-05db0fd12be3","Type":"ContainerStarted","Data":"c5e3e1a2946bbafabcd5ef2f44a011a834ee00aaabcd53f261a20eaeaa69e236"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.039409 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" event={"ID":"9acb5d27-9286-4d3d-9e36-237482223717","Type":"ContainerStarted","Data":"6c0f2ea7e9b4d21e5934331d0ab924d283dcf57862fb822c71d29ad4d6f00be1"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.081204 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" event={"ID":"422b74fd-82b6-4fe3-b9e6-fc044ec8436f","Type":"ContainerStarted","Data":"ac05a31b3065e85a3ea66a7eb2692c501be2c846c5567b58824e99580c2c8d3a"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.081246 4865 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" event={"ID":"422b74fd-82b6-4fe3-b9e6-fc044ec8436f","Type":"ContainerStarted","Data":"aa1e78ab1fc6a4ef728981232c089073688e054dfc67066138bd161e198b46ff"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.091754 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" event={"ID":"e265134c-b7db-4575-a84b-bc2c6806fffb","Type":"ContainerStarted","Data":"7b2acae1cc6a4d17da716bc748c17b85ee4b38fb186a23cc1c64b0a5a49de2ce"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.092887 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gzd5x" event={"ID":"9a3c42c2-81aa-404b-ad80-7d534f6a6007","Type":"ContainerStarted","Data":"5c7ed78d5b2ef9fb86fc32a05424cacf2731014478ad6f3343fce8f6abd177f8"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.093781 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" event={"ID":"d374fa65-2538-47f8-abc5-5f7eac853d58","Type":"ContainerStarted","Data":"bfe4686f3b7d8d4d153ca4e12dc2a9f1e9ac828aa1e759d6273ed066158f440c"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.103735 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.104652 4865 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-gk4fh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body= Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.104712 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" podUID="67fa6cdb-c380-4b05-a05d-9df4a4b86019" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.104751 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" event={"ID":"67fa6cdb-c380-4b05-a05d-9df4a4b86019","Type":"ContainerStarted","Data":"53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81"} Jan 23 11:54:58 crc kubenswrapper[4865]: E0123 11:54:58.105067 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:58.60504994 +0000 UTC m=+142.774122166 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.115527 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn" event={"ID":"03a35268-fb83-4d1b-8880-ed275cc23052","Type":"ContainerStarted","Data":"9e98d2e2904268ac8080a2c06476c284bc4c566686006164d4854a5764d43a53"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.128588 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podStartSLOduration=122.128570272 podStartE2EDuration="2m2.128570272s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.07205402 +0000 UTC m=+142.241126246" watchObservedRunningTime="2026-01-23 11:54:58.128570272 +0000 UTC m=+142.297642498" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.164651 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xhktr" podStartSLOduration=123.164632465 podStartE2EDuration="2m3.164632465s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.129854302 +0000 UTC m=+142.298926528" watchObservedRunningTime="2026-01-23 11:54:58.164632465 +0000 UTC m=+142.333704691" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.165580 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-xfxrn" podStartSLOduration=123.165574747 podStartE2EDuration="2m3.165574747s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.163282582 +0000 UTC m=+142.332354798" watchObservedRunningTime="2026-01-23 11:54:58.165574747 +0000 UTC m=+142.334646973" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.201198 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" event={"ID":"adc78811-9b09-4d82-bba2-2a63f0c52f7b","Type":"ContainerStarted","Data":"a5ed67807c23c1e2dd77fde5b9c6dfcb3de355ac0e28ef99e68d992be24a7e49"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.204998 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:58 crc kubenswrapper[4865]: E0123 11:54:58.206338 4865 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:58.706318442 +0000 UTC m=+142.875390668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.252221 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" podStartSLOduration=122.25220667 podStartE2EDuration="2m2.25220667s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.251299118 +0000 UTC m=+142.420371344" watchObservedRunningTime="2026-01-23 11:54:58.25220667 +0000 UTC m=+142.421278896" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.278516 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" event={"ID":"e79e3141-542e-448b-a2fd-2ac6fc6ef33b","Type":"ContainerStarted","Data":"8be498de9e4aa182751b34871c2710ac2de2053885e893b10b4797c0a675ebd1"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.305800 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:58 crc kubenswrapper[4865]: E0123 11:54:58.306034 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:58.806016347 +0000 UTC m=+142.975088573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.306073 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:58 crc kubenswrapper[4865]: E0123 11:54:58.307185 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:58.807178465 +0000 UTC m=+142.976250691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.307869 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zv7b2" podStartSLOduration=123.307848541 podStartE2EDuration="2m3.307848541s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.305660349 +0000 UTC m=+142.474732575" watchObservedRunningTime="2026-01-23 11:54:58.307848541 +0000 UTC m=+142.476920767" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.320745 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" event={"ID":"b7e3b68e-b5c0-4446-9d59-39be6a478326","Type":"ContainerStarted","Data":"8907b0054518f55ce664b6200be95b1b72a1c4e0279580e6ae1e7deb6c2cf0aa"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.339107 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" event={"ID":"843c383b-053f-42f5-88ce-7a216f5354a3","Type":"ContainerStarted","Data":"574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.339990 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.353254 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: 
connect: connection refused" start-of-body= Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.353315 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.356171 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vfc5n" podStartSLOduration=123.356159287 podStartE2EDuration="2m3.356159287s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.355636844 +0000 UTC m=+142.524709070" watchObservedRunningTime="2026-01-23 11:54:58.356159287 +0000 UTC m=+142.525231513" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.372086 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" event={"ID":"3e0d8d02-d114-4cc4-9a04-823669e39fa2","Type":"ContainerStarted","Data":"a3e70f801f87cbecdd65664af3deca997ac32fc47339c6daa2ed5288b2568f2e"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.414564 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tb57w" event={"ID":"8b1224b1-f7b9-48de-9842-9c0c91f4d96a","Type":"ContainerStarted","Data":"6ed4873a66c866def8b305c173e5cad64853152b9d5edb84300f178eed34ee80"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.423178 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:58 crc kubenswrapper[4865]: E0123 11:54:58.438740 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:58.938697721 +0000 UTC m=+143.107769957 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.466313 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" event={"ID":"51f498e1-f13f-4977-a3e3-ea8bc6b75c6f","Type":"ContainerStarted","Data":"3a6cdc07474a7e6f982b1933fcbb49be03e0ebd90d6c5a9cc16b0169ad1283b5"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.470379 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podStartSLOduration=122.470354699 podStartE2EDuration="2m2.470354699s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.421025228 +0000 UTC m=+142.590097454" watchObservedRunningTime="2026-01-23 11:54:58.470354699 +0000 UTC m=+142.639426925" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.474702 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" podStartSLOduration=122.474686812 podStartE2EDuration="2m2.474686812s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.469443507 +0000 UTC m=+142.638515753" watchObservedRunningTime="2026-01-23 11:54:58.474686812 +0000 UTC m=+142.643759038" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.501565 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" event={"ID":"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed","Type":"ContainerStarted","Data":"c671602e3c5fd0e21517838e04ce0a61347597db9991193b05279d7403441f69"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.538638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" event={"ID":"62eff2ee-82ac-4672-9109-3a72c02f32e6","Type":"ContainerStarted","Data":"af986a9ee1d9c048dce70f42095e290453f5d02936625cebe6010e77f81f68da"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.544293 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:58 crc kubenswrapper[4865]: E0123 11:54:58.546992 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.046979201 +0000 UTC m=+143.216051427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.555030 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" event={"ID":"4ff3d137-2611-43f9-9825-4839a271fc69","Type":"ContainerStarted","Data":"a613e6ad765d40ef05f74133db66c0c8c69c51e65380830fdb6963c5fee120f3"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.559854 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bx4k5" event={"ID":"e1f4986a-71c2-4cba-a049-8f1ea07cfd17","Type":"ContainerStarted","Data":"5acf90ec63654a14462edd5a39befb24729cd5ba3f221f7e4ce1e0eac877d2d7"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.567820 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" event={"ID":"2c1ba660-8691-49e2-b0cc-056355d82f4c","Type":"ContainerStarted","Data":"af083092c38de2ca6827d8fedaa9e40bb8755c65b424296597702d1ed7970262"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.569512 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" event={"ID":"582f83b4-97dc-4f56-9879-c73fab80488a","Type":"ContainerStarted","Data":"7da2c84436c83d0ba331f7213b02ad4cb29accbcee8c247e34cc5e1b0195afc5"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.592399 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs" event={"ID":"41b1eece-7199-4214-9add-15fd7c3039c7","Type":"ContainerStarted","Data":"2eb1c8f3b0a3cfe492ddfa222f5a99ae82408a2f2497f2ad270308d649fb2b5b"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.607237 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" podStartSLOduration=123.607219312 podStartE2EDuration="2m3.607219312s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.541830549 +0000 UTC m=+142.710902775" watchObservedRunningTime="2026-01-23 11:54:58.607219312 +0000 UTC m=+142.776291538" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.608102 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bx4k5" podStartSLOduration=8.608097884 podStartE2EDuration="8.608097884s" podCreationTimestamp="2026-01-23 11:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.606392333 +0000 UTC m=+142.775464559" watchObservedRunningTime="2026-01-23 11:54:58.608097884 +0000 UTC m=+142.777170110" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.629834 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z28b7" 
event={"ID":"4a8ae231-47bc-49ee-8413-de5a08c05d08","Type":"ContainerStarted","Data":"120f1563493e2dc1874f277dcf5ead03f4bdec4ef6afbdfb43b1068e0742f66f"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.648935 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:58 crc kubenswrapper[4865]: E0123 11:54:58.650709 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.150681522 +0000 UTC m=+143.319753748 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.651137 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs" podStartSLOduration=122.651101553 podStartE2EDuration="2m2.651101553s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.640090949 +0000 UTC m=+142.809163175" watchObservedRunningTime="2026-01-23 11:54:58.651101553 +0000 UTC m=+142.820173779" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.698418 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" event={"ID":"3670e4c8-6d81-4ace-ad05-7c045097b991","Type":"ContainerStarted","Data":"5b38f36a251c223a20ae88429d1b3c472d8307c4b4630e910aa974c3242669f2"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.732174 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" event={"ID":"401d6c1a-be67-4fb7-97f6-d46e3ba35960","Type":"ContainerStarted","Data":"a784fa8d8251162adb391b8b38a65f4d4267e5d4f1110c96bc957680cb44cd3d"} Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.733880 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.733926 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.743274 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.748178 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-gc69w" podStartSLOduration=123.748166204 podStartE2EDuration="2m3.748166204s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.745078821 +0000 UTC m=+142.914151047" watchObservedRunningTime="2026-01-23 11:54:58.748166204 +0000 UTC m=+142.917238430" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.758224 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:58 crc kubenswrapper[4865]: E0123 11:54:58.759232 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.259207639 +0000 UTC m=+143.428279865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.793996 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:54:58 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:54:58 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:54:58 crc kubenswrapper[4865]: healthz check failed Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.794051 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.799447 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" podStartSLOduration=122.799434531 podStartE2EDuration="2m2.799434531s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:58.798792845 +0000 UTC m=+142.967865071" watchObservedRunningTime="2026-01-23 11:54:58.799434531 +0000 UTC m=+142.968506767" Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.868044 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:58 crc kubenswrapper[4865]: E0123 11:54:58.870186 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.370154312 +0000 UTC m=+143.539226538 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:58 crc kubenswrapper[4865]: I0123 11:54:58.970790 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:58 crc kubenswrapper[4865]: E0123 11:54:58.971102 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.471087217 +0000 UTC m=+143.640159443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.073256 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.073584 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.573548868 +0000 UTC m=+143.742621084 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.074191 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.074633 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.574590453 +0000 UTC m=+143.743662679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.175235 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.175420 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.675385734 +0000 UTC m=+143.844457960 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.175556 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.176031 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.67601503 +0000 UTC m=+143.845087256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.277560 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.277971 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.777945307 +0000 UTC m=+143.947017533 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.378618 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.378940 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.878924863 +0000 UTC m=+144.047997089 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.482404 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.482628 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.982577033 +0000 UTC m=+144.151649259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.483126 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.483482 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:54:59.983473504 +0000 UTC m=+144.152545730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.584283 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.584491 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:00.08447285 +0000 UTC m=+144.253545066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.584553 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.585096 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:00.085066475 +0000 UTC m=+144.254138911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.686122 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.686322 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:00.186294976 +0000 UTC m=+144.355367212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.686477 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.686790 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:00.186778418 +0000 UTC m=+144.355850644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.740498 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-gzd5x" event={"ID":"9a3c42c2-81aa-404b-ad80-7d534f6a6007","Type":"ContainerStarted","Data":"e453245b35372099fa63d327fa72664a99c1c11a922f5e6e29cbc83ad9c4f7c5"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.742850 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-z28b7" event={"ID":"4a8ae231-47bc-49ee-8413-de5a08c05d08","Type":"ContainerStarted","Data":"eb5142c273e2c9ac69ad2f101a689cd0771cb283dc4019a0cae50cef3252c398"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.744617 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" event={"ID":"401d6c1a-be67-4fb7-97f6-d46e3ba35960","Type":"ContainerStarted","Data":"5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.744878 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.746928 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mwzzv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.746984 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" podUID="401d6c1a-be67-4fb7-97f6-d46e3ba35960" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.747801 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" event={"ID":"582f83b4-97dc-4f56-9879-c73fab80488a","Type":"ContainerStarted","Data":"adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.747948 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.749393 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g5xkl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.749442 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.751213 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" event={"ID":"62eff2ee-82ac-4672-9109-3a72c02f32e6","Type":"ContainerStarted","Data":"80bd707332632c9bbd6207e7a8db68c25f4bfa22ceb053cd7890a7d6b2720d79"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.753187 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-25dzs" event={"ID":"41b1eece-7199-4214-9add-15fd7c3039c7","Type":"ContainerStarted","Data":"a7fc727d8c69827668164a2f17e4ef7edb1ee518db64ea43865be3a215bb5cbf"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.754921 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" event={"ID":"9acb5d27-9286-4d3d-9e36-237482223717","Type":"ContainerStarted","Data":"50eebaf66b9f8f3dc459a3440f3f64a355e567fb812361855fc9ea2c42b343fb"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.756710 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-8mx4s" event={"ID":"3e0d8d02-d114-4cc4-9a04-823669e39fa2","Type":"ContainerStarted","Data":"2d380b6f6989111f1858546ab0d0b152012679716173346eee67e8ea47339fd1"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.758390 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-n4z5m" event={"ID":"adc78811-9b09-4d82-bba2-2a63f0c52f7b","Type":"ContainerStarted","Data":"9310f55d9416a34b9b4ba204e34df3236e09fa1a21dd44112dd714223de819c3"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.760477 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" event={"ID":"422b74fd-82b6-4fe3-b9e6-fc044ec8436f","Type":"ContainerStarted","Data":"7da77e80e795bf48e43fc94395b226d16fc9223027c020fe2da17f0fde490d34"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.762970 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" event={"ID":"2c1ba660-8691-49e2-b0cc-056355d82f4c","Type":"ContainerStarted","Data":"23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.763026 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" event={"ID":"2c1ba660-8691-49e2-b0cc-056355d82f4c","Type":"ContainerStarted","Data":"aff41b78fcf778e1b27eab73939afff9ddebcb141dce28cd6afa1932f6851ba2"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.763093 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.764682 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" event={"ID":"4ff3d137-2611-43f9-9825-4839a271fc69","Type":"ContainerStarted","Data":"8b215e4b9db3918b79b7bbe63c0d9182a68cf8ff4464265fde57691fa30ee049"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.767140 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" event={"ID":"e265134c-b7db-4575-a84b-bc2c6806fffb","Type":"ContainerStarted","Data":"40d7d904a73451c000394bb8b6c88bbe583f23ba59fa28ba47ef735d846bf8e8"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.769308 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" event={"ID":"d3d080f1-896a-4b64-8ff3-05db0fd12be3","Type":"ContainerStarted","Data":"b2ce3c45dda8656802a5fb742eaa0a0dc87caa85a1a2c4a21319aafb9d3cbb7e"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.769414 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" event={"ID":"d3d080f1-896a-4b64-8ff3-05db0fd12be3","Type":"ContainerStarted","Data":"89cf1bd59127fa986067872b79095c7c7cfbffe9512ec8ad4346da1de9a4a03f"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.775977 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-gzd5x" podStartSLOduration=124.775957931 podStartE2EDuration="2m4.775957931s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:59.769862256 +0000 UTC m=+143.938934482" watchObservedRunningTime="2026-01-23 11:54:59.775957931 +0000 UTC m=+143.945030147" Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.778354 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tb57w" event={"ID":"8b1224b1-f7b9-48de-9842-9c0c91f4d96a","Type":"ContainerStarted","Data":"0f99036a1af928677f72410539b454197d89e6d0bc994a749fcf1e82b3c6ac7c"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.778403 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tb57w" event={"ID":"8b1224b1-f7b9-48de-9842-9c0c91f4d96a","Type":"ContainerStarted","Data":"51de647b76df103645c23727bfb9f593a487a7801707f34ac304213ea951d33e"} Jan 23 
11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.779565 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:54:59 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:54:59 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:54:59 crc kubenswrapper[4865]: healthz check failed Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.779643 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.780715 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-78x6m" event={"ID":"752d7a7c-100b-4c07-a601-96309d9e4a33","Type":"ContainerStarted","Data":"490aea5efba83d4e0a418a50bebd2297f4d81ff2d16fd686e0c1b6a362ae76b4"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.780800 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-78x6m" event={"ID":"752d7a7c-100b-4c07-a601-96309d9e4a33","Type":"ContainerStarted","Data":"ac0c71708ecc25dfa1f786df3ca56acc76b648388f36abb95f594a0aa812f407"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.783071 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" event={"ID":"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed","Type":"ContainerStarted","Data":"e9c3e1560f5b66efcbb55cce9e1082cfa19890fbcae89e199e02c135ef2d6496"} Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.784730 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.785661 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.788832 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.789011 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:00.288984432 +0000 UTC m=+144.458056658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.789275 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.789646 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:00.289638259 +0000 UTC m=+144.458710485 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.806960 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.827219 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-w8js8" podStartSLOduration=124.827190437 podStartE2EDuration="2m4.827190437s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:59.826255174 +0000 UTC m=+143.995327410" watchObservedRunningTime="2026-01-23 11:54:59.827190437 +0000 UTC m=+143.996262663" Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.878223 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-z28b7" podStartSLOduration=9.878196797 podStartE2EDuration="9.878196797s" podCreationTimestamp="2026-01-23 11:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:59.86073224 +0000 UTC m=+144.029804466" watchObservedRunningTime="2026-01-23 11:54:59.878196797 +0000 UTC m=+144.047269023" Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.891283 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:54:59 crc kubenswrapper[4865]: 
E0123 11:54:59.894289 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:00.394265752 +0000 UTC m=+144.563337978 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.916468 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-zbtbp" podStartSLOduration=123.916429921 podStartE2EDuration="2m3.916429921s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:59.907981389 +0000 UTC m=+144.077053635" watchObservedRunningTime="2026-01-23 11:54:59.916429921 +0000 UTC m=+144.085502147" Jan 23 11:54:59 crc kubenswrapper[4865]: I0123 11:54:59.994447 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:54:59 crc kubenswrapper[4865]: E0123 11:54:59.994976 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:00.49495984 +0000 UTC m=+144.664032066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.040562 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podStartSLOduration=124.04053351 podStartE2EDuration="2m4.04053351s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:00.040378967 +0000 UTC m=+144.209451203" watchObservedRunningTime="2026-01-23 11:55:00.04053351 +0000 UTC m=+144.209605736" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.042890 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8xqt5" podStartSLOduration=125.042880767 podStartE2EDuration="2m5.042880767s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:54:59.961041139 +0000 UTC m=+144.130113365" watchObservedRunningTime="2026-01-23 11:55:00.042880767 +0000 UTC m=+144.211952993" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.096484 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:00 crc kubenswrapper[4865]: E0123 11:55:00.096791 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:00.596774726 +0000 UTC m=+144.765846942 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.112629 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rdt9c" podStartSLOduration=125.112614224 podStartE2EDuration="2m5.112614224s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:00.110544795 +0000 UTC m=+144.279617021" watchObservedRunningTime="2026-01-23 11:55:00.112614224 +0000 UTC m=+144.281686450" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.188573 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podStartSLOduration=124.188557722 podStartE2EDuration="2m4.188557722s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:00.149874476 +0000 UTC m=+144.318946702" watchObservedRunningTime="2026-01-23 11:55:00.188557722 +0000 UTC m=+144.357629948" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.199531 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:00 crc kubenswrapper[4865]: E0123 11:55:00.199836 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:00.699823601 +0000 UTC m=+144.868895827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.234207 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2crw8" podStartSLOduration=124.234180463 podStartE2EDuration="2m4.234180463s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:00.18892476 +0000 UTC m=+144.357996986" watchObservedRunningTime="2026-01-23 11:55:00.234180463 +0000 UTC m=+144.403252689" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.234710 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7hq88" podStartSLOduration=124.234703796 podStartE2EDuration="2m4.234703796s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:00.233151459 +0000 UTC m=+144.402223705" watchObservedRunningTime="2026-01-23 11:55:00.234703796 +0000 UTC m=+144.403776042" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.270673 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-78x6m" podStartSLOduration=124.270650135 podStartE2EDuration="2m4.270650135s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:00.269217321 +0000 UTC m=+144.438289537" watchObservedRunningTime="2026-01-23 11:55:00.270650135 +0000 UTC m=+144.439722361" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.300476 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:00 crc kubenswrapper[4865]: E0123 11:55:00.300919 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:00.800892679 +0000 UTC m=+144.969964895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.388612 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" podStartSLOduration=126.388570797 podStartE2EDuration="2m6.388570797s" podCreationTimestamp="2026-01-23 11:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:00.385151814 +0000 UTC m=+144.554224040" watchObservedRunningTime="2026-01-23 11:55:00.388570797 +0000 UTC m=+144.557643023" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.407945 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:00 crc kubenswrapper[4865]: E0123 11:55:00.408381 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:00.90836473 +0000 UTC m=+145.077436956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.414180 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-tb57w" podStartSLOduration=124.414151578 podStartE2EDuration="2m4.414151578s" podCreationTimestamp="2026-01-23 11:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:00.408887882 +0000 UTC m=+144.577960118" watchObservedRunningTime="2026-01-23 11:55:00.414151578 +0000 UTC m=+144.583223804" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.459271 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-88ktq"] Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.459554 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" podUID="131a949e-2d37-47b4-8d7e-1f1e1afb9283" containerName="controller-manager" containerID="cri-o://320d71b41214ea352e0e5a25a063242581a6ab1f64cad04090ad89f0897dea40" gracePeriod=30 Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.515064 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:00 crc kubenswrapper[4865]: E0123 11:55:00.515512 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:01.015495172 +0000 UTC m=+145.184567398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.544761 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.616240 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:00 crc kubenswrapper[4865]: E0123 11:55:00.616547 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:01.116532789 +0000 UTC m=+145.285605015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.717744 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:00 crc kubenswrapper[4865]: E0123 11:55:00.718008 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:01.217963786 +0000 UTC m=+145.387036012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.718148 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:00 crc kubenswrapper[4865]: E0123 11:55:00.718575 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:01.2185672 +0000 UTC m=+145.387639426 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.778861 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:00 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:00 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:00 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.779347 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.783904 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.783994 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.792546 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerStarted","Data":"d3f348f30a0017ba243f859f2bc54454e30d67ccd848bfd2587974f95c228e5d"} Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.794418 4865 generic.go:334] "Generic (PLEG): container finished" podID="131a949e-2d37-47b4-8d7e-1f1e1afb9283" containerID="320d71b41214ea352e0e5a25a063242581a6ab1f64cad04090ad89f0897dea40" exitCode=0 Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.794621 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" event={"ID":"131a949e-2d37-47b4-8d7e-1f1e1afb9283","Type":"ContainerDied","Data":"320d71b41214ea352e0e5a25a063242581a6ab1f64cad04090ad89f0897dea40"} Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.798687 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-z28b7" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.801251 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mwzzv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.801332 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" podUID="401d6c1a-be67-4fb7-97f6-d46e3ba35960" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.821407 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:00 crc kubenswrapper[4865]: E0123 11:55:00.821782 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:01.321764009 +0000 UTC m=+145.490836235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.888710 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.922688 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:00 crc kubenswrapper[4865]: E0123 11:55:00.927074 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:01.427052318 +0000 UTC m=+145.596124544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:00 crc kubenswrapper[4865]: I0123 11:55:00.950902 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.024929 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:01 crc kubenswrapper[4865]: E0123 11:55:01.025432 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:01.52541001 +0000 UTC m=+145.694482236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.132875 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:01 crc kubenswrapper[4865]: E0123 11:55:01.133316 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:01.633299971 +0000 UTC m=+145.802372197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.235403 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:01 crc kubenswrapper[4865]: E0123 11:55:01.236127 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:01.736101261 +0000 UTC m=+145.905173487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.338451 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:01 crc kubenswrapper[4865]: E0123 11:55:01.338968 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:01.838943661 +0000 UTC m=+146.008015887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.343487 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.368231 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f5nf9"] Jan 23 11:55:01 crc kubenswrapper[4865]: E0123 11:55:01.368522 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131a949e-2d37-47b4-8d7e-1f1e1afb9283" containerName="controller-manager" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.368537 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="131a949e-2d37-47b4-8d7e-1f1e1afb9283" containerName="controller-manager" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.368673 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="131a949e-2d37-47b4-8d7e-1f1e1afb9283" containerName="controller-manager" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.369555 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.400222 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.421844 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f5nf9"] Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.430681 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s5sbt"] Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.431703 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.441583 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-config\") pod \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.441659 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-proxy-ca-bundles\") pod \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.441697 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjzms\" (UniqueName: \"kubernetes.io/projected/131a949e-2d37-47b4-8d7e-1f1e1afb9283-kube-api-access-rjzms\") pod \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.441753 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-client-ca\") pod \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.441772 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/131a949e-2d37-47b4-8d7e-1f1e1afb9283-serving-cert\") pod \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\" (UID: \"131a949e-2d37-47b4-8d7e-1f1e1afb9283\") " Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.443442 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "131a949e-2d37-47b4-8d7e-1f1e1afb9283" (UID: "131a949e-2d37-47b4-8d7e-1f1e1afb9283"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.443730 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-config" (OuterVolumeSpecName: "config") pod "131a949e-2d37-47b4-8d7e-1f1e1afb9283" (UID: "131a949e-2d37-47b4-8d7e-1f1e1afb9283"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.444025 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-client-ca" (OuterVolumeSpecName: "client-ca") pod "131a949e-2d37-47b4-8d7e-1f1e1afb9283" (UID: "131a949e-2d37-47b4-8d7e-1f1e1afb9283"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.444763 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.445205 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.445229 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.445243 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/131a949e-2d37-47b4-8d7e-1f1e1afb9283-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:01 crc kubenswrapper[4865]: E0123 11:55:01.445318 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:01.945297435 +0000 UTC m=+146.114369661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:01 crc kubenswrapper[4865]: W0123 11:55:01.450994 4865 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 23 11:55:01 crc kubenswrapper[4865]: E0123 11:55:01.451054 4865 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.491019 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/131a949e-2d37-47b4-8d7e-1f1e1afb9283-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "131a949e-2d37-47b4-8d7e-1f1e1afb9283" (UID: "131a949e-2d37-47b4-8d7e-1f1e1afb9283"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.492686 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gzh6l"] Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.493928 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.498139 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/131a949e-2d37-47b4-8d7e-1f1e1afb9283-kube-api-access-rjzms" (OuterVolumeSpecName: "kube-api-access-rjzms") pod "131a949e-2d37-47b4-8d7e-1f1e1afb9283" (UID: "131a949e-2d37-47b4-8d7e-1f1e1afb9283"). InnerVolumeSpecName "kube-api-access-rjzms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.519647 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5sbt"] Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.548127 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.548192 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/752a7b3b-7850-4bba-b8ce-be070452a538-utilities\") pod \"certified-operators-s5sbt\" (UID: \"752a7b3b-7850-4bba-b8ce-be070452a538\") " pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.548231 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzwzn\" (UniqueName: \"kubernetes.io/projected/2d701fdb-266c-4e83-a0b6-099bfd0987a9-kube-api-access-mzwzn\") pod \"community-operators-f5nf9\" (UID: \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\") " pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.548274 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zk56\" (UniqueName: \"kubernetes.io/projected/752a7b3b-7850-4bba-b8ce-be070452a538-kube-api-access-5zk56\") pod \"certified-operators-s5sbt\" (UID: \"752a7b3b-7850-4bba-b8ce-be070452a538\") " pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.548294 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/752a7b3b-7850-4bba-b8ce-be070452a538-catalog-content\") pod \"certified-operators-s5sbt\" (UID: \"752a7b3b-7850-4bba-b8ce-be070452a538\") " pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.548324 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d701fdb-266c-4e83-a0b6-099bfd0987a9-utilities\") pod \"community-operators-f5nf9\" (UID: \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\") " 
pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.548361 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d701fdb-266c-4e83-a0b6-099bfd0987a9-catalog-content\") pod \"community-operators-f5nf9\" (UID: \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\") " pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.548425 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjzms\" (UniqueName: \"kubernetes.io/projected/131a949e-2d37-47b4-8d7e-1f1e1afb9283-kube-api-access-rjzms\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.548438 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/131a949e-2d37-47b4-8d7e-1f1e1afb9283-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:01 crc kubenswrapper[4865]: E0123 11:55:01.548855 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:02.048840422 +0000 UTC m=+146.217912648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.549064 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gzh6l"] Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.622672 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cgp9n"] Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.623396 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.650140 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.650406 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/752a7b3b-7850-4bba-b8ce-be070452a538-catalog-content\") pod \"certified-operators-s5sbt\" (UID: \"752a7b3b-7850-4bba-b8ce-be070452a538\") " pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.650449 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d701fdb-266c-4e83-a0b6-099bfd0987a9-utilities\") pod \"community-operators-f5nf9\" (UID: \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\") " pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.650488 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f13b08e-48a9-423c-ae8c-d1b13239074d-catalog-content\") pod \"community-operators-gzh6l\" (UID: \"8f13b08e-48a9-423c-ae8c-d1b13239074d\") " pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.650509 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f13b08e-48a9-423c-ae8c-d1b13239074d-utilities\") pod \"community-operators-gzh6l\" (UID: \"8f13b08e-48a9-423c-ae8c-d1b13239074d\") " pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.650534 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d701fdb-266c-4e83-a0b6-099bfd0987a9-catalog-content\") pod \"community-operators-f5nf9\" (UID: \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\") " pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.650579 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmxjp\" (UniqueName: \"kubernetes.io/projected/8f13b08e-48a9-423c-ae8c-d1b13239074d-kube-api-access-vmxjp\") pod \"community-operators-gzh6l\" (UID: \"8f13b08e-48a9-423c-ae8c-d1b13239074d\") " pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.650650 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/752a7b3b-7850-4bba-b8ce-be070452a538-utilities\") pod \"certified-operators-s5sbt\" (UID: \"752a7b3b-7850-4bba-b8ce-be070452a538\") " pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.650671 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzwzn\" (UniqueName: 
\"kubernetes.io/projected/2d701fdb-266c-4e83-a0b6-099bfd0987a9-kube-api-access-mzwzn\") pod \"community-operators-f5nf9\" (UID: \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\") " pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.650725 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zk56\" (UniqueName: \"kubernetes.io/projected/752a7b3b-7850-4bba-b8ce-be070452a538-kube-api-access-5zk56\") pod \"certified-operators-s5sbt\" (UID: \"752a7b3b-7850-4bba-b8ce-be070452a538\") " pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.651893 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cgp9n"] Jan 23 11:55:01 crc kubenswrapper[4865]: E0123 11:55:01.652104 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:02.152078712 +0000 UTC m=+146.321150938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.652584 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/752a7b3b-7850-4bba-b8ce-be070452a538-catalog-content\") pod \"certified-operators-s5sbt\" (UID: \"752a7b3b-7850-4bba-b8ce-be070452a538\") " pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.653290 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d701fdb-266c-4e83-a0b6-099bfd0987a9-catalog-content\") pod \"community-operators-f5nf9\" (UID: \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\") " pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.653555 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/752a7b3b-7850-4bba-b8ce-be070452a538-utilities\") pod \"certified-operators-s5sbt\" (UID: \"752a7b3b-7850-4bba-b8ce-be070452a538\") " pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.654013 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d701fdb-266c-4e83-a0b6-099bfd0987a9-utilities\") pod \"community-operators-f5nf9\" (UID: \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\") " pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.660631 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h4zc4"] Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.661677 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.711710 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzwzn\" (UniqueName: \"kubernetes.io/projected/2d701fdb-266c-4e83-a0b6-099bfd0987a9-kube-api-access-mzwzn\") pod \"community-operators-f5nf9\" (UID: \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\") " pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.715267 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zk56\" (UniqueName: \"kubernetes.io/projected/752a7b3b-7850-4bba-b8ce-be070452a538-kube-api-access-5zk56\") pod \"certified-operators-s5sbt\" (UID: \"752a7b3b-7850-4bba-b8ce-be070452a538\") " pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.718565 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4zc4"] Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.770472 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-serving-cert\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.770551 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.770589 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmxjp\" (UniqueName: \"kubernetes.io/projected/8f13b08e-48a9-423c-ae8c-d1b13239074d-kube-api-access-vmxjp\") pod \"community-operators-gzh6l\" (UID: \"8f13b08e-48a9-423c-ae8c-d1b13239074d\") " pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.770648 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxhz\" (UniqueName: \"kubernetes.io/projected/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-kube-api-access-tvxhz\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.770900 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777ae5a8-8d44-4d0d-a598-d1782fcc9585-utilities\") pod \"certified-operators-h4zc4\" (UID: \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\") " pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.770959 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: 
\"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.771011 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777ae5a8-8d44-4d0d-a598-d1782fcc9585-catalog-content\") pod \"certified-operators-h4zc4\" (UID: \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\") " pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.771144 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-config\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: E0123 11:55:01.771427 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:02.271392536 +0000 UTC m=+146.440464762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.775975 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:01 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:01 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:01 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.776043 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.789843 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79gr7\" (UniqueName: \"kubernetes.io/projected/777ae5a8-8d44-4d0d-a598-d1782fcc9585-kube-api-access-79gr7\") pod \"certified-operators-h4zc4\" (UID: \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\") " pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.789936 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f13b08e-48a9-423c-ae8c-d1b13239074d-catalog-content\") pod \"community-operators-gzh6l\" (UID: \"8f13b08e-48a9-423c-ae8c-d1b13239074d\") " pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.789970 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-client-ca\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.790001 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f13b08e-48a9-423c-ae8c-d1b13239074d-utilities\") pod \"community-operators-gzh6l\" (UID: \"8f13b08e-48a9-423c-ae8c-d1b13239074d\") " pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.790586 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f13b08e-48a9-423c-ae8c-d1b13239074d-utilities\") pod \"community-operators-gzh6l\" (UID: \"8f13b08e-48a9-423c-ae8c-d1b13239074d\") " pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.790982 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f13b08e-48a9-423c-ae8c-d1b13239074d-catalog-content\") pod \"community-operators-gzh6l\" (UID: \"8f13b08e-48a9-423c-ae8c-d1b13239074d\") " pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.811246 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmxjp\" (UniqueName: \"kubernetes.io/projected/8f13b08e-48a9-423c-ae8c-d1b13239074d-kube-api-access-vmxjp\") pod \"community-operators-gzh6l\" (UID: \"8f13b08e-48a9-423c-ae8c-d1b13239074d\") " pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.841274 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerStarted","Data":"52f68907f42dbf951b0b7da3ea2aca946d1f958f2f54d7361dfefe5480fd799b"} Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.858259 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.860707 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.864977 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-88ktq" event={"ID":"131a949e-2d37-47b4-8d7e-1f1e1afb9283","Type":"ContainerDied","Data":"95364e09f96d05bba3f7646b7da419a575c73301f522266f5ea9d82de5c37296"} Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.865034 4865 scope.go:117] "RemoveContainer" containerID="320d71b41214ea352e0e5a25a063242581a6ab1f64cad04090ad89f0897dea40" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.896027 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.896245 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxhz\" (UniqueName: \"kubernetes.io/projected/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-kube-api-access-tvxhz\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.896298 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777ae5a8-8d44-4d0d-a598-d1782fcc9585-utilities\") pod \"certified-operators-h4zc4\" (UID: \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\") " pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.896332 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777ae5a8-8d44-4d0d-a598-d1782fcc9585-catalog-content\") pod \"certified-operators-h4zc4\" (UID: \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\") " pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.896363 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-config\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.896424 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79gr7\" (UniqueName: \"kubernetes.io/projected/777ae5a8-8d44-4d0d-a598-d1782fcc9585-kube-api-access-79gr7\") pod \"certified-operators-h4zc4\" (UID: \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\") " pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.896456 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-client-ca\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.896499 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-serving-cert\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.896517 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.897084 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777ae5a8-8d44-4d0d-a598-d1782fcc9585-catalog-content\") pod \"certified-operators-h4zc4\" (UID: \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\") " pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:55:01 crc kubenswrapper[4865]: E0123 11:55:01.897312 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:02.397286398 +0000 UTC m=+146.566358634 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.897833 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777ae5a8-8d44-4d0d-a598-d1782fcc9585-utilities\") pod \"certified-operators-h4zc4\" (UID: \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\") " pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.898257 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-config\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.898792 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-client-ca\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.900571 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.928444 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-serving-cert\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.939110 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79gr7\" (UniqueName: \"kubernetes.io/projected/777ae5a8-8d44-4d0d-a598-d1782fcc9585-kube-api-access-79gr7\") pod \"certified-operators-h4zc4\" (UID: \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\") " pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.947671 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-88ktq"] Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.956530 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-88ktq"] Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.960322 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxhz\" (UniqueName: \"kubernetes.io/projected/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-kube-api-access-tvxhz\") pod \"controller-manager-879f6c89f-cgp9n\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.984238 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:55:01 crc kubenswrapper[4865]: I0123 11:55:01.984304 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.006335 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.007930 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:55:02 crc kubenswrapper[4865]: E0123 11:55:02.008759 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:02.508746524 +0000 UTC m=+146.677818750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.014893 4865 patch_prober.go:28] interesting pod/apiserver-76f77b778f-r8fk2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 11:55:02 crc kubenswrapper[4865]: [+]log ok Jan 23 11:55:02 crc kubenswrapper[4865]: [+]etcd ok Jan 23 11:55:02 crc kubenswrapper[4865]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 11:55:02 crc kubenswrapper[4865]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 11:55:02 crc kubenswrapper[4865]: [+]poststarthook/max-in-flight-filter ok Jan 23 11:55:02 crc kubenswrapper[4865]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 11:55:02 crc kubenswrapper[4865]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 23 11:55:02 crc kubenswrapper[4865]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 23 11:55:02 crc kubenswrapper[4865]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 23 11:55:02 crc kubenswrapper[4865]: [+]poststarthook/project.openshift.io-projectcache ok Jan 23 11:55:02 crc kubenswrapper[4865]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 23 11:55:02 crc kubenswrapper[4865]: [+]poststarthook/openshift.io-startinformers ok Jan 23 11:55:02 crc kubenswrapper[4865]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 23 11:55:02 crc kubenswrapper[4865]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 11:55:02 crc kubenswrapper[4865]: livez check failed Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.015425 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" podUID="51f498e1-f13f-4977-a3e3-ea8bc6b75c6f" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.107570 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:02 crc kubenswrapper[4865]: E0123 11:55:02.109473 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:02.609439573 +0000 UTC m=+146.778511799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.129288 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="131a949e-2d37-47b4-8d7e-1f1e1afb9283" path="/var/lib/kubelet/pods/131a949e-2d37-47b4-8d7e-1f1e1afb9283/volumes" Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.219739 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:02 crc kubenswrapper[4865]: E0123 11:55:02.221906 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:02.721885603 +0000 UTC m=+146.890957829 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.255001 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.322827 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:02 crc kubenswrapper[4865]: E0123 11:55:02.323271 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:02.823248227 +0000 UTC m=+146.992320453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.424536 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:02 crc kubenswrapper[4865]: E0123 11:55:02.424972 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:02.924959711 +0000 UTC m=+147.094031937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.529908 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:02 crc kubenswrapper[4865]: E0123 11:55:02.530494 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.030469504 +0000 UTC m=+147.199541720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.530797 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gzh6l"] Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.589031 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.592713 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.605747 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.631783 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:02 crc kubenswrapper[4865]: E0123 11:55:02.632099 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.132086316 +0000 UTC m=+147.301158542 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.738335 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:02 crc kubenswrapper[4865]: E0123 11:55:02.738528 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.238494551 +0000 UTC m=+147.407566787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.738684 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:02 crc kubenswrapper[4865]: E0123 11:55:02.738993 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 11:55:03.238983612 +0000 UTC m=+147.408055838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.783586 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:02 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:02 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:02 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.783674 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.839521 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:02 crc kubenswrapper[4865]: E0123 11:55:02.840010 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.339991809 +0000 UTC m=+147.509064035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.862954 4865 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.924905 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerStarted","Data":"7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c"} Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.931038 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gzh6l" event={"ID":"8f13b08e-48a9-423c-ae8c-d1b13239074d","Type":"ContainerStarted","Data":"9124fc91e0897ed300a7fab607e8da08d1fde4dde1fe8d45dc5c4455cc9fb3f3"} Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.942169 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:02 crc kubenswrapper[4865]: E0123 11:55:02.942515 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.442502142 +0000 UTC m=+147.611574368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:02 crc kubenswrapper[4865]: I0123 11:55:02.970242 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podStartSLOduration=12.970222614 podStartE2EDuration="12.970222614s" podCreationTimestamp="2026-01-23 11:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:02.967269733 +0000 UTC m=+147.136341959" watchObservedRunningTime="2026-01-23 11:55:02.970222614 +0000 UTC m=+147.139294840" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.002881 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cgp9n"] Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.031235 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9w54z"] Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.032160 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.036884 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.048050 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:03 crc kubenswrapper[4865]: E0123 11:55:03.048909 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.548884697 +0000 UTC m=+147.717956923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.068642 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9w54z"] Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.072108 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f5nf9"] Jan 23 11:55:03 crc kubenswrapper[4865]: W0123 11:55:03.079896 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d701fdb_266c_4e83_a0b6_099bfd0987a9.slice/crio-c9024586abf232fb9a3fe5420c4d12b3e7dbe1059207d2874bb4c242b970fc5c WatchSource:0}: Error finding container c9024586abf232fb9a3fe5420c4d12b3e7dbe1059207d2874bb4c242b970fc5c: Status 404 returned error can't find the container with id c9024586abf232fb9a3fe5420c4d12b3e7dbe1059207d2874bb4c242b970fc5c Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.151356 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.151430 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.151463 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778153be-8013-460c-8000-e58ba9f45cd9-utilities\") pod \"redhat-marketplace-9w54z\" (UID: \"778153be-8013-460c-8000-e58ba9f45cd9\") " pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.151480 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bndwm\" (UniqueName: \"kubernetes.io/projected/778153be-8013-460c-8000-e58ba9f45cd9-kube-api-access-bndwm\") pod \"redhat-marketplace-9w54z\" (UID: \"778153be-8013-460c-8000-e58ba9f45cd9\") " pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.155933 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 
11:55:03.156012 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778153be-8013-460c-8000-e58ba9f45cd9-catalog-content\") pod \"redhat-marketplace-9w54z\" (UID: \"778153be-8013-460c-8000-e58ba9f45cd9\") " pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.156057 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.156095 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:55:03 crc kubenswrapper[4865]: E0123 11:55:03.156534 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.656519321 +0000 UTC m=+147.825591547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.158635 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.168332 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.168463 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.168950 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.268255 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:03 crc kubenswrapper[4865]: E0123 11:55:03.272582 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.772552557 +0000 UTC m=+147.941624783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.272902 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.273071 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778153be-8013-460c-8000-e58ba9f45cd9-utilities\") pod \"redhat-marketplace-9w54z\" (UID: \"778153be-8013-460c-8000-e58ba9f45cd9\") " pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.273199 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bndwm\" (UniqueName: \"kubernetes.io/projected/778153be-8013-460c-8000-e58ba9f45cd9-kube-api-access-bndwm\") pod \"redhat-marketplace-9w54z\" (UID: \"778153be-8013-460c-8000-e58ba9f45cd9\") " pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.273308 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778153be-8013-460c-8000-e58ba9f45cd9-catalog-content\") pod \"redhat-marketplace-9w54z\" (UID: \"778153be-8013-460c-8000-e58ba9f45cd9\") " pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.273837 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778153be-8013-460c-8000-e58ba9f45cd9-catalog-content\") pod \"redhat-marketplace-9w54z\" (UID: \"778153be-8013-460c-8000-e58ba9f45cd9\") " pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:55:03 crc kubenswrapper[4865]: E0123 
11:55:03.274163 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.774150876 +0000 UTC m=+147.943223102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-6ph28" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.274457 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778153be-8013-460c-8000-e58ba9f45cd9-utilities\") pod \"redhat-marketplace-9w54z\" (UID: \"778153be-8013-460c-8000-e58ba9f45cd9\") " pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.328323 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bndwm\" (UniqueName: \"kubernetes.io/projected/778153be-8013-460c-8000-e58ba9f45cd9-kube-api-access-bndwm\") pod \"redhat-marketplace-9w54z\" (UID: \"778153be-8013-460c-8000-e58ba9f45cd9\") " pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.332066 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.342988 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.349061 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.366053 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.366723 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.366800 4865 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T11:55:02.862978949Z","Handler":null,"Name":""} Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.373744 4865 patch_prober.go:28] interesting pod/console-f9d7485db-bpdjt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.373813 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bpdjt" podUID="34e16446-9445-4646-bf3b-08764f77f949" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.374534 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.374829 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:03 crc kubenswrapper[4865]: E0123 11:55:03.375867 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 11:55:03.875850968 +0000 UTC m=+148.044923194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.376936 4865 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.376957 4865 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.397011 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.397066 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.397152 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.397220 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.411169 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4zc4"] Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.475984 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.494596 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5g2xj"] Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.495517 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.521587 4865 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
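[annotation] The repeated "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" mount/unmount failures above are retried on a 500ms backoff until the plugin's registration socket is picked up (plugin_watcher at 11:55:02.862954, RegisterPlugin and "Register new plugin" at 11:55:03.37x), after which STAGE_UNSTAGE_VOLUME is skipped and MountDevice succeeds below. As a rough, hedged sketch only (not part of this log): the node's registered drivers are reflected in its CSINode object, and a minimal client-go query like the following could confirm whether kubevirt.io.hostpath-provisioner has registered; the node name "crc" and the default kubeconfig path are assumptions taken from this log, not verified settings.

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: kubeconfig at the default location (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Assumption: the node is named "crc", as in the journald prefix above.
        csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Each entry here corresponds to a driver the kubelet considers "registered";
        // until kubevirt.io.hostpath-provisioner appears, the mount retries above keep failing.
        for _, d := range csiNode.Spec.Drivers {
            fmt.Println("registered CSI driver:", d.Name)
        }
    }

[/annotation]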
Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.521643 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.565616 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5sbt"] Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.565683 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g2xj"] Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.577041 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-catalog-content\") pod \"redhat-marketplace-5g2xj\" (UID: \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\") " pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.577112 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-utilities\") pod \"redhat-marketplace-5g2xj\" (UID: \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\") " pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.577151 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v6fc\" (UniqueName: \"kubernetes.io/projected/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-kube-api-access-9v6fc\") pod \"redhat-marketplace-5g2xj\" (UID: \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\") " pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:55:03 crc kubenswrapper[4865]: W0123 11:55:03.623618 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod752a7b3b_7850_4bba_b8ce_be070452a538.slice/crio-78b3d1a68fc1f9f013c5828a4250fe27c4058206b9afc07c2690c4d05a0f98e0 WatchSource:0}: Error finding container 78b3d1a68fc1f9f013c5828a4250fe27c4058206b9afc07c2690c4d05a0f98e0: Status 404 returned error can't find the container with id 78b3d1a68fc1f9f013c5828a4250fe27c4058206b9afc07c2690c4d05a0f98e0 Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.678194 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-utilities\") pod \"redhat-marketplace-5g2xj\" (UID: \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\") " pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.678581 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v6fc\" (UniqueName: \"kubernetes.io/projected/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-kube-api-access-9v6fc\") pod \"redhat-marketplace-5g2xj\" (UID: \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\") " pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 
11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.678636 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-catalog-content\") pod \"redhat-marketplace-5g2xj\" (UID: \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\") " pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.680671 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-catalog-content\") pod \"redhat-marketplace-5g2xj\" (UID: \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\") " pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.682323 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-utilities\") pod \"redhat-marketplace-5g2xj\" (UID: \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\") " pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.725459 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v6fc\" (UniqueName: \"kubernetes.io/projected/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-kube-api-access-9v6fc\") pod \"redhat-marketplace-5g2xj\" (UID: \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\") " pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.772750 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.794902 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:03 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:03 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:03 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.795330 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.841291 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.974051 4865 generic.go:334] "Generic (PLEG): container finished" podID="b60cff5f-ff90-4d9a-9980-f2d0ebce2aed" containerID="e9c3e1560f5b66efcbb55cce9e1082cfa19890fbcae89e199e02c135ef2d6496" exitCode=0 Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.974153 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" event={"ID":"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed","Type":"ContainerDied","Data":"e9c3e1560f5b66efcbb55cce9e1082cfa19890fbcae89e199e02c135ef2d6496"} Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.985427 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" event={"ID":"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c","Type":"ContainerStarted","Data":"02a885ad1a6563b0a81d1a9175c854c67f4aecd2006602a57a757c392aff28be"} Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.985479 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" event={"ID":"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c","Type":"ContainerStarted","Data":"5a99ace1393eeecd6e024d4cbdfaaa018257976e12d5323b39c7f30762329ccd"} Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.985900 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.989790 4865 generic.go:334] "Generic (PLEG): container finished" podID="8f13b08e-48a9-423c-ae8c-d1b13239074d" containerID="02c3ac4becade0e703539b9cbabf97e2613016e3313f1b82eb72116da9e7b4d6" exitCode=0 Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.989844 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gzh6l" event={"ID":"8f13b08e-48a9-423c-ae8c-d1b13239074d","Type":"ContainerDied","Data":"02c3ac4becade0e703539b9cbabf97e2613016e3313f1b82eb72116da9e7b4d6"} Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.991791 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 11:55:03 crc kubenswrapper[4865]: I0123 11:55:03.996013 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5sbt" event={"ID":"752a7b3b-7850-4bba-b8ce-be070452a538","Type":"ContainerStarted","Data":"78b3d1a68fc1f9f013c5828a4250fe27c4058206b9afc07c2690c4d05a0f98e0"} Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.008097 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zc4" event={"ID":"777ae5a8-8d44-4d0d-a598-d1782fcc9585","Type":"ContainerStarted","Data":"abc8042ac05993f5a0d5a83caca0f286ded9fe48d551b3833d5b1cfede359c9d"} Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.008142 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zc4" event={"ID":"777ae5a8-8d44-4d0d-a598-d1782fcc9585","Type":"ContainerStarted","Data":"f833e33df596fac61e47ea585899a711c32fe2b2fd25463a4fe016f2cd98113a"} Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.015513 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-6ph28\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.026232 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" containerID="68bb8b4014767e16315e5b3078e08404ddd47c5d34f48c79a272af17c3e494ec" exitCode=0 Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.026955 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nf9" event={"ID":"2d701fdb-266c-4e83-a0b6-099bfd0987a9","Type":"ContainerDied","Data":"68bb8b4014767e16315e5b3078e08404ddd47c5d34f48c79a272af17c3e494ec"} Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.026989 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nf9" event={"ID":"2d701fdb-266c-4e83-a0b6-099bfd0987a9","Type":"ContainerStarted","Data":"c9024586abf232fb9a3fe5420c4d12b3e7dbe1059207d2874bb4c242b970fc5c"} Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.059126 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.091439 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.106994 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.149453 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.160284 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.209396 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.284777 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" podStartSLOduration=3.284538216 podStartE2EDuration="3.284538216s" podCreationTimestamp="2026-01-23 11:55:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:04.159921704 +0000 UTC m=+148.328993920" watchObservedRunningTime="2026-01-23 11:55:04.284538216 +0000 UTC m=+148.453610442" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.401691 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9w54z"] Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.421116 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l76kv"] Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.422069 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.433420 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.448157 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l76kv"] Jan 23 11:55:04 crc kubenswrapper[4865]: W0123 11:55:04.469005 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod778153be_8013_460c_8000_e58ba9f45cd9.slice/crio-ed9f8d60172867512a36b9d9c0d77505c413c7ac64bbfd5b1ac1e8f9c8c1d271 WatchSource:0}: Error finding container ed9f8d60172867512a36b9d9c0d77505c413c7ac64bbfd5b1ac1e8f9c8c1d271: Status 404 returned error can't find the container with id ed9f8d60172867512a36b9d9c0d77505c413c7ac64bbfd5b1ac1e8f9c8c1d271 Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.521111 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-utilities\") pod \"redhat-operators-l76kv\" (UID: \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\") " pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.521185 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5t94\" (UniqueName: \"kubernetes.io/projected/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-kube-api-access-g5t94\") pod \"redhat-operators-l76kv\" (UID: \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\") " pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.521215 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-catalog-content\") pod \"redhat-operators-l76kv\" (UID: \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\") " pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.622085 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5t94\" (UniqueName: 
\"kubernetes.io/projected/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-kube-api-access-g5t94\") pod \"redhat-operators-l76kv\" (UID: \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\") " pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.622143 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-catalog-content\") pod \"redhat-operators-l76kv\" (UID: \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\") " pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.622195 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-utilities\") pod \"redhat-operators-l76kv\" (UID: \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\") " pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.622683 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-utilities\") pod \"redhat-operators-l76kv\" (UID: \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\") " pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.623173 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-catalog-content\") pod \"redhat-operators-l76kv\" (UID: \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\") " pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.659584 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5t94\" (UniqueName: \"kubernetes.io/projected/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-kube-api-access-g5t94\") pod \"redhat-operators-l76kv\" (UID: \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\") " pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.780255 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6ph28"] Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.780720 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:04 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:04 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:04 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.780780 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:04 crc kubenswrapper[4865]: W0123 11:55:04.788863 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd18c2296_0938_4fef_8c63_8bd9f25c8fc3.slice/crio-18207ffb42a34f40a01f21906b84f5d4e159640642936e8e906404af67d0b464 WatchSource:0}: Error finding container 
18207ffb42a34f40a01f21906b84f5d4e159640642936e8e906404af67d0b464: Status 404 returned error can't find the container with id 18207ffb42a34f40a01f21906b84f5d4e159640642936e8e906404af67d0b464 Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.820377 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xvmlx"] Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.821874 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.822974 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.843451 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvmlx"] Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.925324 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzqhz\" (UniqueName: \"kubernetes.io/projected/abe1c851-e34c-49e4-991b-cec1c55a25b6-kube-api-access-fzqhz\") pod \"redhat-operators-xvmlx\" (UID: \"abe1c851-e34c-49e4-991b-cec1c55a25b6\") " pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.931657 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe1c851-e34c-49e4-991b-cec1c55a25b6-utilities\") pod \"redhat-operators-xvmlx\" (UID: \"abe1c851-e34c-49e4-991b-cec1c55a25b6\") " pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.931726 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe1c851-e34c-49e4-991b-cec1c55a25b6-catalog-content\") pod \"redhat-operators-xvmlx\" (UID: \"abe1c851-e34c-49e4-991b-cec1c55a25b6\") " pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:55:04 crc kubenswrapper[4865]: I0123 11:55:04.974245 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g2xj"] Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.036016 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzqhz\" (UniqueName: \"kubernetes.io/projected/abe1c851-e34c-49e4-991b-cec1c55a25b6-kube-api-access-fzqhz\") pod \"redhat-operators-xvmlx\" (UID: \"abe1c851-e34c-49e4-991b-cec1c55a25b6\") " pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.036100 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe1c851-e34c-49e4-991b-cec1c55a25b6-utilities\") pod \"redhat-operators-xvmlx\" (UID: \"abe1c851-e34c-49e4-991b-cec1c55a25b6\") " pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.036134 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe1c851-e34c-49e4-991b-cec1c55a25b6-catalog-content\") pod \"redhat-operators-xvmlx\" (UID: \"abe1c851-e34c-49e4-991b-cec1c55a25b6\") " pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.036634 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe1c851-e34c-49e4-991b-cec1c55a25b6-catalog-content\") pod \"redhat-operators-xvmlx\" (UID: \"abe1c851-e34c-49e4-991b-cec1c55a25b6\") " pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.036759 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe1c851-e34c-49e4-991b-cec1c55a25b6-utilities\") pod \"redhat-operators-xvmlx\" (UID: \"abe1c851-e34c-49e4-991b-cec1c55a25b6\") " pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.095848 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzqhz\" (UniqueName: \"kubernetes.io/projected/abe1c851-e34c-49e4-991b-cec1c55a25b6-kube-api-access-fzqhz\") pod \"redhat-operators-xvmlx\" (UID: \"abe1c851-e34c-49e4-991b-cec1c55a25b6\") " pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.127100 4865 generic.go:334] "Generic (PLEG): container finished" podID="778153be-8013-460c-8000-e58ba9f45cd9" containerID="58bcfa63b4e962842a48ec8d6e7b39180be7a7498db7b6e87c309b6fc4f977fd" exitCode=0 Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.127173 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9w54z" event={"ID":"778153be-8013-460c-8000-e58ba9f45cd9","Type":"ContainerDied","Data":"58bcfa63b4e962842a48ec8d6e7b39180be7a7498db7b6e87c309b6fc4f977fd"} Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.127201 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9w54z" event={"ID":"778153be-8013-460c-8000-e58ba9f45cd9","Type":"ContainerStarted","Data":"ed9f8d60172867512a36b9d9c0d77505c413c7ac64bbfd5b1ac1e8f9c8c1d271"} Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.148551 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g2xj" event={"ID":"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec","Type":"ContainerStarted","Data":"6bdc3633096dbf8f1c178eef2282e3d6e04e8905c5f5bb5d184241bd32f3730f"} Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.149356 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.175191 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" event={"ID":"d18c2296-0938-4fef-8c63-8bd9f25c8fc3","Type":"ContainerStarted","Data":"18207ffb42a34f40a01f21906b84f5d4e159640642936e8e906404af67d0b464"} Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.175754 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.188711 4865 generic.go:334] "Generic (PLEG): container finished" podID="752a7b3b-7850-4bba-b8ce-be070452a538" containerID="a9ea781f2a1c4624229fa7600f7e5cc94c313f58f491ce52963cd99888122411" exitCode=0 Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.188814 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5sbt" event={"ID":"752a7b3b-7850-4bba-b8ce-be070452a538","Type":"ContainerDied","Data":"a9ea781f2a1c4624229fa7600f7e5cc94c313f58f491ce52963cd99888122411"} Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.196445 4865 generic.go:334] "Generic (PLEG): container finished" podID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" containerID="abc8042ac05993f5a0d5a83caca0f286ded9fe48d551b3833d5b1cfede359c9d" exitCode=0 Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.196513 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zc4" event={"ID":"777ae5a8-8d44-4d0d-a598-d1782fcc9585","Type":"ContainerDied","Data":"abc8042ac05993f5a0d5a83caca0f286ded9fe48d551b3833d5b1cfede359c9d"} Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.214633 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l76kv"] Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.225623 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" podStartSLOduration=130.224959043 podStartE2EDuration="2m10.224959043s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:05.214876432 +0000 UTC m=+149.383948658" watchObservedRunningTime="2026-01-23 11:55:05.224959043 +0000 UTC m=+149.394031269" Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.236977 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"dce8c3bf417df3286d8018f8069dd3c46dd5e18cac11d2122aebdbdd93f1acd1"} Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.246774 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9e21580251a71600dc7f65d6359d8114c645fb12e906a649d7c59437d8c038e9"} Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.264758 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5ca828d34bbf2b4291d6370893e45dc8cd89d8a6ccd3986fb57784dff82783de"} Jan 23 
11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.791232 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:05 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:05 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:05 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.792326 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.896555 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.969564 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xvmlx"] Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.978237 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-config-volume\") pod \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\" (UID: \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\") " Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.978339 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-secret-volume\") pod \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\" (UID: \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\") " Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.978394 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99ztv\" (UniqueName: \"kubernetes.io/projected/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-kube-api-access-99ztv\") pod \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\" (UID: \"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed\") " Jan 23 11:55:05 crc kubenswrapper[4865]: I0123 11:55:05.979935 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-config-volume" (OuterVolumeSpecName: "config-volume") pod "b60cff5f-ff90-4d9a-9980-f2d0ebce2aed" (UID: "b60cff5f-ff90-4d9a-9980-f2d0ebce2aed"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:05.996289 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b60cff5f-ff90-4d9a-9980-f2d0ebce2aed" (UID: "b60cff5f-ff90-4d9a-9980-f2d0ebce2aed"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.009781 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-kube-api-access-99ztv" (OuterVolumeSpecName: "kube-api-access-99ztv") pod "b60cff5f-ff90-4d9a-9980-f2d0ebce2aed" (UID: "b60cff5f-ff90-4d9a-9980-f2d0ebce2aed"). 
InnerVolumeSpecName "kube-api-access-99ztv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.080748 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.080779 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.080788 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99ztv\" (UniqueName: \"kubernetes.io/projected/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed-kube-api-access-99ztv\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.195181 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 11:55:06 crc kubenswrapper[4865]: E0123 11:55:06.195624 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60cff5f-ff90-4d9a-9980-f2d0ebce2aed" containerName="collect-profiles" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.195639 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60cff5f-ff90-4d9a-9980-f2d0ebce2aed" containerName="collect-profiles" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.195780 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b60cff5f-ff90-4d9a-9980-f2d0ebce2aed" containerName="collect-profiles" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.196333 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.199548 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.199667 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.203362 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.295032 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abb2c23f-cad6-4c95-bf25-78096bdf4e21-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"abb2c23f-cad6-4c95-bf25-78096bdf4e21\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.295235 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abb2c23f-cad6-4c95-bf25-78096bdf4e21-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"abb2c23f-cad6-4c95-bf25-78096bdf4e21\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.323544 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvmlx" event={"ID":"abe1c851-e34c-49e4-991b-cec1c55a25b6","Type":"ContainerStarted","Data":"63a811f6410c07885d1cd405663b6d4cbd93641f1e42e6b55da8e17a0abc1618"} Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.328518 4865 generic.go:334] "Generic (PLEG): container finished" podID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" containerID="c49f357a2ee9cfab85abf2348b356e3c44ca803b505c9498668faeb5e5aa4cf1" exitCode=0 Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.328621 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l76kv" event={"ID":"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9","Type":"ContainerDied","Data":"c49f357a2ee9cfab85abf2348b356e3c44ca803b505c9498668faeb5e5aa4cf1"} Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.328660 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l76kv" event={"ID":"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9","Type":"ContainerStarted","Data":"ede125b937d3d5f20bdbe3fa666380f2293bd8109591742ec052a22c7c13092d"} Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.336068 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ba563d0ab2400e156fa922bd524253ac74afc5313e6ffe890c4fd920c675eff0"} Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.336361 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.344564 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c4e495bc4f02e5e6d2634c189ac8c77f3213298eda52a0504fc10894ea1417c2"} Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.356089 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7688ab4d47ee6cb633128e2bb676057ddde758fd24b9cac07192a290422a3348"} Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.373179 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" event={"ID":"b60cff5f-ff90-4d9a-9980-f2d0ebce2aed","Type":"ContainerDied","Data":"c671602e3c5fd0e21517838e04ce0a61347597db9991193b05279d7403441f69"} Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.373240 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c671602e3c5fd0e21517838e04ce0a61347597db9991193b05279d7403441f69" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.373315 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.408843 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abb2c23f-cad6-4c95-bf25-78096bdf4e21-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"abb2c23f-cad6-4c95-bf25-78096bdf4e21\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.409010 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abb2c23f-cad6-4c95-bf25-78096bdf4e21-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"abb2c23f-cad6-4c95-bf25-78096bdf4e21\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.408996 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abb2c23f-cad6-4c95-bf25-78096bdf4e21-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"abb2c23f-cad6-4c95-bf25-78096bdf4e21\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.418882 4865 generic.go:334] "Generic (PLEG): container finished" podID="0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" containerID="6160ccb531d53df90e88749b3557d63e1d1ded7a7c27bcf59aecd158a8bba92e" exitCode=0 Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.419003 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g2xj" event={"ID":"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec","Type":"ContainerDied","Data":"6160ccb531d53df90e88749b3557d63e1d1ded7a7c27bcf59aecd158a8bba92e"} Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.438786 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" event={"ID":"d18c2296-0938-4fef-8c63-8bd9f25c8fc3","Type":"ContainerStarted","Data":"b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d"} Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.443769 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/abb2c23f-cad6-4c95-bf25-78096bdf4e21-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"abb2c23f-cad6-4c95-bf25-78096bdf4e21\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.566104 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.775834 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:06 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:06 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:06 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.776183 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.906039 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 11:55:06 crc kubenswrapper[4865]: I0123 11:55:06.994508 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:55:07 crc kubenswrapper[4865]: I0123 11:55:07.008733 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" Jan 23 11:55:07 crc kubenswrapper[4865]: I0123 11:55:07.488665 4865 generic.go:334] "Generic (PLEG): container finished" podID="abe1c851-e34c-49e4-991b-cec1c55a25b6" containerID="c651ee5a776d2358c9284b638628ef0105b705cbb421bbf3962b3ffa0a6e5182" exitCode=0 Jan 23 11:55:07 crc kubenswrapper[4865]: I0123 11:55:07.488898 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvmlx" event={"ID":"abe1c851-e34c-49e4-991b-cec1c55a25b6","Type":"ContainerDied","Data":"c651ee5a776d2358c9284b638628ef0105b705cbb421bbf3962b3ffa0a6e5182"} Jan 23 11:55:07 crc kubenswrapper[4865]: I0123 11:55:07.503712 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"abb2c23f-cad6-4c95-bf25-78096bdf4e21","Type":"ContainerStarted","Data":"e105a45e76d157cc61ea586acbd994562549737a32bba552ff678fe462e6493a"} Jan 23 11:55:07 crc kubenswrapper[4865]: I0123 11:55:07.783170 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:07 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:07 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:07 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:07 crc kubenswrapper[4865]: I0123 11:55:07.783244 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Jan 23 11:55:07 crc kubenswrapper[4865]: I0123 11:55:07.967027 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 11:55:08 crc kubenswrapper[4865]: I0123 11:55:08.675467 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-z28b7" Jan 23 11:55:08 crc kubenswrapper[4865]: I0123 11:55:08.779749 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:08 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:08 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:08 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:08 crc kubenswrapper[4865]: I0123 11:55:08.779814 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:09 crc kubenswrapper[4865]: I0123 11:55:09.556786 4865 generic.go:334] "Generic (PLEG): container finished" podID="abb2c23f-cad6-4c95-bf25-78096bdf4e21" containerID="09b917a9a83194ae9d204641fc6dce62c21a457182f1ff92bc42515ad2cdaea1" exitCode=0 Jan 23 11:55:09 crc kubenswrapper[4865]: I0123 11:55:09.556865 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"abb2c23f-cad6-4c95-bf25-78096bdf4e21","Type":"ContainerDied","Data":"09b917a9a83194ae9d204641fc6dce62c21a457182f1ff92bc42515ad2cdaea1"} Jan 23 11:55:09 crc kubenswrapper[4865]: I0123 11:55:09.774155 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:09 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:09 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:09 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:09 crc kubenswrapper[4865]: I0123 11:55:09.774242 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.448008 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.451994 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.456280 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.463082 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.466475 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.546569 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c908d242-e689-411d-bd85-36852fb41bfa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c908d242-e689-411d-bd85-36852fb41bfa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.546708 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c908d242-e689-411d-bd85-36852fb41bfa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c908d242-e689-411d-bd85-36852fb41bfa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.648726 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c908d242-e689-411d-bd85-36852fb41bfa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c908d242-e689-411d-bd85-36852fb41bfa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.648954 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c908d242-e689-411d-bd85-36852fb41bfa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"c908d242-e689-411d-bd85-36852fb41bfa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.650155 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c908d242-e689-411d-bd85-36852fb41bfa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c908d242-e689-411d-bd85-36852fb41bfa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.693573 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c908d242-e689-411d-bd85-36852fb41bfa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"c908d242-e689-411d-bd85-36852fb41bfa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.776558 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:10 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:10 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:10 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.776650 4865 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:10 crc kubenswrapper[4865]: I0123 11:55:10.802363 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.174553 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.274357 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abb2c23f-cad6-4c95-bf25-78096bdf4e21-kubelet-dir\") pod \"abb2c23f-cad6-4c95-bf25-78096bdf4e21\" (UID: \"abb2c23f-cad6-4c95-bf25-78096bdf4e21\") " Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.274506 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abb2c23f-cad6-4c95-bf25-78096bdf4e21-kube-api-access\") pod \"abb2c23f-cad6-4c95-bf25-78096bdf4e21\" (UID: \"abb2c23f-cad6-4c95-bf25-78096bdf4e21\") " Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.274534 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abb2c23f-cad6-4c95-bf25-78096bdf4e21-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "abb2c23f-cad6-4c95-bf25-78096bdf4e21" (UID: "abb2c23f-cad6-4c95-bf25-78096bdf4e21"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.274855 4865 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/abb2c23f-cad6-4c95-bf25-78096bdf4e21-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.282041 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb2c23f-cad6-4c95-bf25-78096bdf4e21-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "abb2c23f-cad6-4c95-bf25-78096bdf4e21" (UID: "abb2c23f-cad6-4c95-bf25-78096bdf4e21"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.404644 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.431391 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abb2c23f-cad6-4c95-bf25-78096bdf4e21-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:11 crc kubenswrapper[4865]: W0123 11:55:11.486458 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc908d242_e689_411d_bd85_36852fb41bfa.slice/crio-3bc9a470b278b648c4d99af69af6ddee7fb0e4fb2c37b960192aa120bc9772e9 WatchSource:0}: Error finding container 3bc9a470b278b648c4d99af69af6ddee7fb0e4fb2c37b960192aa120bc9772e9: Status 404 returned error can't find the container with id 3bc9a470b278b648c4d99af69af6ddee7fb0e4fb2c37b960192aa120bc9772e9 Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.612840 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"abb2c23f-cad6-4c95-bf25-78096bdf4e21","Type":"ContainerDied","Data":"e105a45e76d157cc61ea586acbd994562549737a32bba552ff678fe462e6493a"} Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.612898 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e105a45e76d157cc61ea586acbd994562549737a32bba552ff678fe462e6493a" Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.612991 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.622007 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c908d242-e689-411d-bd85-36852fb41bfa","Type":"ContainerStarted","Data":"3bc9a470b278b648c4d99af69af6ddee7fb0e4fb2c37b960192aa120bc9772e9"} Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.778423 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:11 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:11 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:11 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:11 crc kubenswrapper[4865]: I0123 11:55:11.779184 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:12 crc kubenswrapper[4865]: I0123 11:55:12.783060 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:12 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:12 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:12 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:12 crc kubenswrapper[4865]: I0123 11:55:12.783111 4865 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:13 crc kubenswrapper[4865]: I0123 11:55:13.366307 4865 patch_prober.go:28] interesting pod/console-f9d7485db-bpdjt container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 23 11:55:13 crc kubenswrapper[4865]: I0123 11:55:13.367157 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bpdjt" podUID="34e16446-9445-4646-bf3b-08764f77f949" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 23 11:55:13 crc kubenswrapper[4865]: I0123 11:55:13.396762 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-48b72" Jan 23 11:55:13 crc kubenswrapper[4865]: I0123 11:55:13.700931 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c908d242-e689-411d-bd85-36852fb41bfa","Type":"ContainerStarted","Data":"d7c8ddf684039947d384e47b451a0eede303829240e76158b9853e793b15da6a"} Jan 23 11:55:13 crc kubenswrapper[4865]: I0123 11:55:13.734668 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.734646822 podStartE2EDuration="3.734646822s" podCreationTimestamp="2026-01-23 11:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:13.728278209 +0000 UTC m=+157.897350425" watchObservedRunningTime="2026-01-23 11:55:13.734646822 +0000 UTC m=+157.903719048" Jan 23 11:55:13 crc kubenswrapper[4865]: I0123 11:55:13.782475 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:13 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:13 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:13 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:13 crc kubenswrapper[4865]: I0123 11:55:13.782547 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:14 crc kubenswrapper[4865]: I0123 11:55:14.744706 4865 generic.go:334] "Generic (PLEG): container finished" podID="c908d242-e689-411d-bd85-36852fb41bfa" containerID="d7c8ddf684039947d384e47b451a0eede303829240e76158b9853e793b15da6a" exitCode=0 Jan 23 11:55:14 crc kubenswrapper[4865]: I0123 11:55:14.744772 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c908d242-e689-411d-bd85-36852fb41bfa","Type":"ContainerDied","Data":"d7c8ddf684039947d384e47b451a0eede303829240e76158b9853e793b15da6a"} Jan 23 11:55:14 crc kubenswrapper[4865]: I0123 11:55:14.786987 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:14 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:14 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:14 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:14 crc kubenswrapper[4865]: I0123 11:55:14.787066 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:15 crc kubenswrapper[4865]: I0123 11:55:15.774957 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 11:55:15 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 11:55:15 crc kubenswrapper[4865]: [+]process-running ok Jan 23 11:55:15 crc kubenswrapper[4865]: healthz check failed Jan 23 11:55:15 crc kubenswrapper[4865]: I0123 11:55:15.775487 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 11:55:16 crc kubenswrapper[4865]: I0123 11:55:16.389770 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 11:55:16 crc kubenswrapper[4865]: I0123 11:55:16.572013 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c908d242-e689-411d-bd85-36852fb41bfa-kube-api-access\") pod \"c908d242-e689-411d-bd85-36852fb41bfa\" (UID: \"c908d242-e689-411d-bd85-36852fb41bfa\") " Jan 23 11:55:16 crc kubenswrapper[4865]: I0123 11:55:16.572092 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c908d242-e689-411d-bd85-36852fb41bfa-kubelet-dir\") pod \"c908d242-e689-411d-bd85-36852fb41bfa\" (UID: \"c908d242-e689-411d-bd85-36852fb41bfa\") " Jan 23 11:55:16 crc kubenswrapper[4865]: I0123 11:55:16.572399 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c908d242-e689-411d-bd85-36852fb41bfa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c908d242-e689-411d-bd85-36852fb41bfa" (UID: "c908d242-e689-411d-bd85-36852fb41bfa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:55:16 crc kubenswrapper[4865]: I0123 11:55:16.577682 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c908d242-e689-411d-bd85-36852fb41bfa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c908d242-e689-411d-bd85-36852fb41bfa" (UID: "c908d242-e689-411d-bd85-36852fb41bfa"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:55:16 crc kubenswrapper[4865]: I0123 11:55:16.674012 4865 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c908d242-e689-411d-bd85-36852fb41bfa-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:16 crc kubenswrapper[4865]: I0123 11:55:16.674047 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c908d242-e689-411d-bd85-36852fb41bfa-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:16 crc kubenswrapper[4865]: I0123 11:55:16.776146 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:55:16 crc kubenswrapper[4865]: I0123 11:55:16.778720 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 11:55:16 crc kubenswrapper[4865]: I0123 11:55:16.808920 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 11:55:16 crc kubenswrapper[4865]: I0123 11:55:16.811147 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"c908d242-e689-411d-bd85-36852fb41bfa","Type":"ContainerDied","Data":"3bc9a470b278b648c4d99af69af6ddee7fb0e4fb2c37b960192aa120bc9772e9"} Jan 23 11:55:16 crc kubenswrapper[4865]: I0123 11:55:16.811207 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bc9a470b278b648c4d99af69af6ddee7fb0e4fb2c37b960192aa120bc9772e9" Jan 23 11:55:17 crc kubenswrapper[4865]: I0123 11:55:17.692071 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:55:17 crc kubenswrapper[4865]: I0123 11:55:17.699110 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a15fb93f-eb63-4a8c-bec6-20bed7300dca-metrics-certs\") pod \"network-metrics-daemon-n76rp\" (UID: \"a15fb93f-eb63-4a8c-bec6-20bed7300dca\") " pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:55:17 crc kubenswrapper[4865]: I0123 11:55:17.839149 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-n76rp" Jan 23 11:55:18 crc kubenswrapper[4865]: I0123 11:55:18.776802 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 11:55:18 crc kubenswrapper[4865]: I0123 11:55:18.777195 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 11:55:19 crc kubenswrapper[4865]: I0123 11:55:19.118039 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cgp9n"] Jan 23 11:55:19 crc kubenswrapper[4865]: I0123 11:55:19.118331 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" podUID="eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" containerName="controller-manager" containerID="cri-o://02a885ad1a6563b0a81d1a9175c854c67f4aecd2006602a57a757c392aff28be" gracePeriod=30 Jan 23 11:55:19 crc kubenswrapper[4865]: I0123 11:55:19.128222 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc"] Jan 23 11:55:19 crc kubenswrapper[4865]: I0123 11:55:19.128470 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" podUID="d5137707-0cd3-4f39-9d73-3401d315e827" containerName="route-controller-manager" containerID="cri-o://596ee86f5ece4078ac6a90e14c52b12d7b428a8a8b1967199d42d72f90b8924b" gracePeriod=30 Jan 23 11:55:19 crc kubenswrapper[4865]: I0123 11:55:19.887778 4865 generic.go:334] "Generic (PLEG): container finished" podID="d5137707-0cd3-4f39-9d73-3401d315e827" containerID="596ee86f5ece4078ac6a90e14c52b12d7b428a8a8b1967199d42d72f90b8924b" exitCode=0 Jan 23 11:55:19 crc kubenswrapper[4865]: I0123 11:55:19.887862 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" event={"ID":"d5137707-0cd3-4f39-9d73-3401d315e827","Type":"ContainerDied","Data":"596ee86f5ece4078ac6a90e14c52b12d7b428a8a8b1967199d42d72f90b8924b"} Jan 23 11:55:19 crc kubenswrapper[4865]: I0123 11:55:19.902133 4865 generic.go:334] "Generic (PLEG): container finished" podID="eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" containerID="02a885ad1a6563b0a81d1a9175c854c67f4aecd2006602a57a757c392aff28be" exitCode=0 Jan 23 11:55:19 crc kubenswrapper[4865]: I0123 11:55:19.902180 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" event={"ID":"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c","Type":"ContainerDied","Data":"02a885ad1a6563b0a81d1a9175c854c67f4aecd2006602a57a757c392aff28be"} Jan 23 11:55:21 crc kubenswrapper[4865]: I0123 11:55:21.717938 4865 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7ddsc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: 
connect: connection refused" start-of-body= Jan 23 11:55:21 crc kubenswrapper[4865]: I0123 11:55:21.717991 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" podUID="d5137707-0cd3-4f39-9d73-3401d315e827" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 23 11:55:22 crc kubenswrapper[4865]: I0123 11:55:22.257272 4865 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cgp9n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" start-of-body= Jan 23 11:55:22 crc kubenswrapper[4865]: I0123 11:55:22.257680 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" podUID="eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" Jan 23 11:55:23 crc kubenswrapper[4865]: I0123 11:55:23.369585 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:55:23 crc kubenswrapper[4865]: I0123 11:55:23.372956 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 11:55:24 crc kubenswrapper[4865]: I0123 11:55:24.218708 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:55:31 crc kubenswrapper[4865]: I0123 11:55:31.718172 4865 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7ddsc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 23 11:55:31 crc kubenswrapper[4865]: I0123 11:55:31.719268 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" podUID="d5137707-0cd3-4f39-9d73-3401d315e827" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 23 11:55:32 crc kubenswrapper[4865]: I0123 11:55:32.256918 4865 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cgp9n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" start-of-body= Jan 23 11:55:32 crc kubenswrapper[4865]: I0123 11:55:32.257013 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" podUID="eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" Jan 23 11:55:34 crc kubenswrapper[4865]: I0123 11:55:34.172865 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.132200 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" event={"ID":"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c","Type":"ContainerDied","Data":"5a99ace1393eeecd6e024d4cbdfaaa018257976e12d5323b39c7f30762329ccd"} Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.133359 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a99ace1393eeecd6e024d4cbdfaaa018257976e12d5323b39c7f30762329ccd" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.151555 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.210823 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-655bbcfcff-459vj"] Jan 23 11:55:40 crc kubenswrapper[4865]: E0123 11:55:40.211422 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb2c23f-cad6-4c95-bf25-78096bdf4e21" containerName="pruner" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.211437 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb2c23f-cad6-4c95-bf25-78096bdf4e21" containerName="pruner" Jan 23 11:55:40 crc kubenswrapper[4865]: E0123 11:55:40.211455 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c908d242-e689-411d-bd85-36852fb41bfa" containerName="pruner" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.211462 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c908d242-e689-411d-bd85-36852fb41bfa" containerName="pruner" Jan 23 11:55:40 crc kubenswrapper[4865]: E0123 11:55:40.211483 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" containerName="controller-manager" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.211492 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" containerName="controller-manager" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.212052 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c908d242-e689-411d-bd85-36852fb41bfa" containerName="pruner" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.212453 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" containerName="controller-manager" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.212478 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb2c23f-cad6-4c95-bf25-78096bdf4e21" containerName="pruner" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.214288 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.231823 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-655bbcfcff-459vj"] Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.254063 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-config\") pod \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.254181 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-client-ca\") pod \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.254226 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvxhz\" (UniqueName: \"kubernetes.io/projected/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-kube-api-access-tvxhz\") pod \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.254291 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-serving-cert\") pod \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.254316 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-proxy-ca-bundles\") pod \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\" (UID: \"eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c\") " Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.255917 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" (UID: "eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.256559 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-client-ca" (OuterVolumeSpecName: "client-ca") pod "eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" (UID: "eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.262754 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" (UID: "eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.264939 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-config" (OuterVolumeSpecName: "config") pod "eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" (UID: "eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.266886 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-kube-api-access-tvxhz" (OuterVolumeSpecName: "kube-api-access-tvxhz") pod "eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" (UID: "eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c"). InnerVolumeSpecName "kube-api-access-tvxhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.355407 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-client-ca\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.355491 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-config\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.355511 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/729ee089-9af6-481a-bd47-e2a4105995bd-serving-cert\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.355535 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-proxy-ca-bundles\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.355771 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh96l\" (UniqueName: \"kubernetes.io/projected/729ee089-9af6-481a-bd47-e2a4105995bd-kube-api-access-jh96l\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.356114 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.356145 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.356164 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.356178 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.356190 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvxhz\" (UniqueName: \"kubernetes.io/projected/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c-kube-api-access-tvxhz\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.457240 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-proxy-ca-bundles\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.457298 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh96l\" (UniqueName: \"kubernetes.io/projected/729ee089-9af6-481a-bd47-e2a4105995bd-kube-api-access-jh96l\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.457364 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-client-ca\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.457407 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-config\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.457425 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/729ee089-9af6-481a-bd47-e2a4105995bd-serving-cert\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.459125 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-client-ca\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.459451 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-config\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.459892 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-proxy-ca-bundles\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.464764 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/729ee089-9af6-481a-bd47-e2a4105995bd-serving-cert\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.473444 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh96l\" (UniqueName: \"kubernetes.io/projected/729ee089-9af6-481a-bd47-e2a4105995bd-kube-api-access-jh96l\") pod \"controller-manager-655bbcfcff-459vj\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:40 crc kubenswrapper[4865]: I0123 11:55:40.582885 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:41 crc kubenswrapper[4865]: I0123 11:55:41.140325 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cgp9n" Jan 23 11:55:41 crc kubenswrapper[4865]: I0123 11:55:41.167630 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cgp9n"] Jan 23 11:55:41 crc kubenswrapper[4865]: I0123 11:55:41.171428 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cgp9n"] Jan 23 11:55:42 crc kubenswrapper[4865]: I0123 11:55:42.124065 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c" path="/var/lib/kubelet/pods/eff5f9fb-b39d-4f5e-813f-d813f5d0bf7c/volumes" Jan 23 11:55:42 crc kubenswrapper[4865]: I0123 11:55:42.718787 4865 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7ddsc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 11:55:42 crc kubenswrapper[4865]: I0123 11:55:42.718875 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" podUID="d5137707-0cd3-4f39-9d73-3401d315e827" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 11:55:43 crc kubenswrapper[4865]: I0123 11:55:43.347928 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 11:55:44 crc kubenswrapper[4865]: I0123 11:55:44.892895 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 11:55:44 crc kubenswrapper[4865]: I0123 11:55:44.894200 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 11:55:44 crc kubenswrapper[4865]: I0123 11:55:44.898449 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 11:55:44 crc kubenswrapper[4865]: I0123 11:55:44.898877 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 11:55:44 crc kubenswrapper[4865]: I0123 11:55:44.904349 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 11:55:45 crc kubenswrapper[4865]: I0123 11:55:45.047071 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/801cf885-df69-4405-b33e-b31f8d4d7fdf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"801cf885-df69-4405-b33e-b31f8d4d7fdf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 11:55:45 crc kubenswrapper[4865]: I0123 11:55:45.047159 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/801cf885-df69-4405-b33e-b31f8d4d7fdf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"801cf885-df69-4405-b33e-b31f8d4d7fdf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 11:55:45 crc kubenswrapper[4865]: I0123 11:55:45.070958 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gk4fh"] Jan 23 11:55:45 crc kubenswrapper[4865]: I0123 11:55:45.148551 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/801cf885-df69-4405-b33e-b31f8d4d7fdf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"801cf885-df69-4405-b33e-b31f8d4d7fdf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 11:55:45 crc kubenswrapper[4865]: I0123 11:55:45.148715 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/801cf885-df69-4405-b33e-b31f8d4d7fdf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"801cf885-df69-4405-b33e-b31f8d4d7fdf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 11:55:45 crc kubenswrapper[4865]: I0123 11:55:45.148880 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/801cf885-df69-4405-b33e-b31f8d4d7fdf-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"801cf885-df69-4405-b33e-b31f8d4d7fdf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 11:55:45 crc kubenswrapper[4865]: I0123 11:55:45.182975 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/801cf885-df69-4405-b33e-b31f8d4d7fdf-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"801cf885-df69-4405-b33e-b31f8d4d7fdf\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 11:55:45 crc kubenswrapper[4865]: I0123 11:55:45.210947 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 11:55:46 crc kubenswrapper[4865]: E0123 11:55:46.682944 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 11:55:46 crc kubenswrapper[4865]: E0123 11:55:46.683159 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g5t94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-l76kv_openshift-marketplace(e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 11:55:46 crc kubenswrapper[4865]: E0123 11:55:46.684630 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-l76kv" podUID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" Jan 23 11:55:48 crc kubenswrapper[4865]: I0123 11:55:48.776908 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 11:55:48 crc kubenswrapper[4865]: I0123 11:55:48.777192 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.025444 4865 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.026201 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.041115 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.108993 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-kube-api-access\") pod \"installer-9-crc\" (UID: \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.109117 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-kubelet-dir\") pod \"installer-9-crc\" (UID: \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.109139 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-var-lock\") pod \"installer-9-crc\" (UID: \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:55:49 crc kubenswrapper[4865]: E0123 11:55:49.181709 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-l76kv" podUID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.210196 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-kubelet-dir\") pod \"installer-9-crc\" (UID: \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.210252 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-var-lock\") pod \"installer-9-crc\" (UID: \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.210290 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-kube-api-access\") pod \"installer-9-crc\" (UID: \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.210616 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-var-lock\") pod \"installer-9-crc\" (UID: \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.210617 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-kubelet-dir\") pod \"installer-9-crc\" (UID: \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:55:49 crc kubenswrapper[4865]: E0123 11:55:49.245585 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 11:55:49 crc kubenswrapper[4865]: E0123 11:55:49.245789 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmxjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gzh6l_openshift-marketplace(8f13b08e-48a9-423c-ae8c-d1b13239074d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 11:55:49 crc kubenswrapper[4865]: E0123 11:55:49.247021 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gzh6l" podUID="8f13b08e-48a9-423c-ae8c-d1b13239074d" Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.253619 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-kube-api-access\") pod \"installer-9-crc\" (UID: \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:55:49 crc kubenswrapper[4865]: I0123 11:55:49.354584 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.101911 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gzh6l" podUID="8f13b08e-48a9-423c-ae8c-d1b13239074d" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.172466 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.232743 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb"] Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.233051 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5137707-0cd3-4f39-9d73-3401d315e827" containerName="route-controller-manager" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.233064 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5137707-0cd3-4f39-9d73-3401d315e827" containerName="route-controller-manager" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.233177 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5137707-0cd3-4f39-9d73-3401d315e827" containerName="route-controller-manager" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.236791 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.254858 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb"] Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.258937 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.259106 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79gr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-h4zc4_openshift-marketplace(777ae5a8-8d44-4d0d-a598-d1782fcc9585): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.260185 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-h4zc4" podUID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.260275 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.260860 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" event={"ID":"d5137707-0cd3-4f39-9d73-3401d315e827","Type":"ContainerDied","Data":"a085f89fa9a4377029e49419bcb2192063bd2fa07761cec9dc4daaeb121ac06e"} Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.260899 4865 scope.go:117] "RemoveContainer" containerID="596ee86f5ece4078ac6a90e14c52b12d7b428a8a8b1967199d42d72f90b8924b" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.261691 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5137707-0cd3-4f39-9d73-3401d315e827-serving-cert\") pod \"d5137707-0cd3-4f39-9d73-3401d315e827\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.262863 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5137707-0cd3-4f39-9d73-3401d315e827-config\") pod \"d5137707-0cd3-4f39-9d73-3401d315e827\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.263177 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-976w6\" (UniqueName: \"kubernetes.io/projected/d5137707-0cd3-4f39-9d73-3401d315e827-kube-api-access-976w6\") pod \"d5137707-0cd3-4f39-9d73-3401d315e827\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.263231 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5137707-0cd3-4f39-9d73-3401d315e827-client-ca\") pod \"d5137707-0cd3-4f39-9d73-3401d315e827\" (UID: \"d5137707-0cd3-4f39-9d73-3401d315e827\") " Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.265541 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5137707-0cd3-4f39-9d73-3401d315e827-client-ca" (OuterVolumeSpecName: "client-ca") pod "d5137707-0cd3-4f39-9d73-3401d315e827" (UID: "d5137707-0cd3-4f39-9d73-3401d315e827"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.268477 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5137707-0cd3-4f39-9d73-3401d315e827-config" (OuterVolumeSpecName: "config") pod "d5137707-0cd3-4f39-9d73-3401d315e827" (UID: "d5137707-0cd3-4f39-9d73-3401d315e827"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.285456 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5137707-0cd3-4f39-9d73-3401d315e827-kube-api-access-976w6" (OuterVolumeSpecName: "kube-api-access-976w6") pod "d5137707-0cd3-4f39-9d73-3401d315e827" (UID: "d5137707-0cd3-4f39-9d73-3401d315e827"). InnerVolumeSpecName "kube-api-access-976w6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.290091 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5137707-0cd3-4f39-9d73-3401d315e827-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d5137707-0cd3-4f39-9d73-3401d315e827" (UID: "d5137707-0cd3-4f39-9d73-3401d315e827"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.338905 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.339110 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zk56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-s5sbt_openshift-marketplace(752a7b3b-7850-4bba-b8ce-be070452a538): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.341084 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-s5sbt" podUID="752a7b3b-7850-4bba-b8ce-be070452a538" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.365360 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p74jl\" (UniqueName: \"kubernetes.io/projected/a0f53229-487d-420f-9323-5ec48b97f717-kube-api-access-p74jl\") pod \"route-controller-manager-795f4b9585-vg6fb\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " 
pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.365407 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0f53229-487d-420f-9323-5ec48b97f717-client-ca\") pod \"route-controller-manager-795f4b9585-vg6fb\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.365462 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0f53229-487d-420f-9323-5ec48b97f717-serving-cert\") pod \"route-controller-manager-795f4b9585-vg6fb\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.365484 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0f53229-487d-420f-9323-5ec48b97f717-config\") pod \"route-controller-manager-795f4b9585-vg6fb\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.365553 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d5137707-0cd3-4f39-9d73-3401d315e827-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.365569 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d5137707-0cd3-4f39-9d73-3401d315e827-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.365580 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5137707-0cd3-4f39-9d73-3401d315e827-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.365591 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-976w6\" (UniqueName: \"kubernetes.io/projected/d5137707-0cd3-4f39-9d73-3401d315e827-kube-api-access-976w6\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.467595 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p74jl\" (UniqueName: \"kubernetes.io/projected/a0f53229-487d-420f-9323-5ec48b97f717-kube-api-access-p74jl\") pod \"route-controller-manager-795f4b9585-vg6fb\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.467717 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0f53229-487d-420f-9323-5ec48b97f717-client-ca\") pod \"route-controller-manager-795f4b9585-vg6fb\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.467801 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a0f53229-487d-420f-9323-5ec48b97f717-serving-cert\") pod \"route-controller-manager-795f4b9585-vg6fb\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.467840 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0f53229-487d-420f-9323-5ec48b97f717-config\") pod \"route-controller-manager-795f4b9585-vg6fb\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.470428 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0f53229-487d-420f-9323-5ec48b97f717-config\") pod \"route-controller-manager-795f4b9585-vg6fb\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.471793 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0f53229-487d-420f-9323-5ec48b97f717-client-ca\") pod \"route-controller-manager-795f4b9585-vg6fb\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.481400 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0f53229-487d-420f-9323-5ec48b97f717-serving-cert\") pod \"route-controller-manager-795f4b9585-vg6fb\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.490558 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p74jl\" (UniqueName: \"kubernetes.io/projected/a0f53229-487d-420f-9323-5ec48b97f717-kube-api-access-p74jl\") pod \"route-controller-manager-795f4b9585-vg6fb\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.558830 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.559011 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzwzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-f5nf9_openshift-marketplace(2d701fdb-266c-4e83-a0b6-099bfd0987a9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.560418 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-f5nf9" podUID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.588189 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.593572 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc"] Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.595992 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc"] Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.721922 4865 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7ddsc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: i/o timeout" start-of-body= Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.721991 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7ddsc" podUID="d5137707-0cd3-4f39-9d73-3401d315e827" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: i/o timeout" Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.754344 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.757720 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-n76rp"] Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.800722 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb"] Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.853084 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 11:55:52 crc kubenswrapper[4865]: I0123 11:55:52.867097 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-655bbcfcff-459vj"] Jan 23 11:55:52 crc kubenswrapper[4865]: W0123 11:55:52.884840 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0f53229_487d_420f_9323_5ec48b97f717.slice/crio-fea520d7e597e8f8fe2a9cd89f4b24398925e512974ec390640229bf4afad3bc WatchSource:0}: Error finding container fea520d7e597e8f8fe2a9cd89f4b24398925e512974ec390640229bf4afad3bc: Status 404 returned error can't find the container with id fea520d7e597e8f8fe2a9cd89f4b24398925e512974ec390640229bf4afad3bc Jan 23 11:55:52 crc kubenswrapper[4865]: W0123 11:55:52.895327 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod729ee089_9af6_481a_bd47_e2a4105995bd.slice/crio-57cf033858bbb15409621e0859f21d6c6f7dab4de8a4cf94427bb694c0e57b8d WatchSource:0}: Error finding container 57cf033858bbb15409621e0859f21d6c6f7dab4de8a4cf94427bb694c0e57b8d: Status 404 returned error can't find the container with id 57cf033858bbb15409621e0859f21d6c6f7dab4de8a4cf94427bb694c0e57b8d Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.969548 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 
11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.970252 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bndwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-9w54z_openshift-marketplace(778153be-8013-460c-8000-e58ba9f45cd9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 11:55:52 crc kubenswrapper[4865]: E0123 11:55:52.973305 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9w54z" podUID="778153be-8013-460c-8000-e58ba9f45cd9" Jan 23 11:55:53 crc kubenswrapper[4865]: E0123 11:55:53.134772 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 11:55:53 crc kubenswrapper[4865]: E0123 11:55:53.134956 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzqhz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xvmlx_openshift-marketplace(abe1c851-e34c-49e4-991b-cec1c55a25b6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 11:55:53 crc kubenswrapper[4865]: E0123 11:55:53.136379 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xvmlx" podUID="abe1c851-e34c-49e4-991b-cec1c55a25b6" Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.287720 4865 generic.go:334] "Generic (PLEG): container finished" podID="0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" containerID="245c2190d4552b0978a570b1ef4495204fa712e61f572f3bf405cf415bebddab" exitCode=0 Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.288515 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g2xj" event={"ID":"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec","Type":"ContainerDied","Data":"245c2190d4552b0978a570b1ef4495204fa712e61f572f3bf405cf415bebddab"} Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.301041 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"886599e1-f8ad-4ffd-9b2b-7db39dec28ee","Type":"ContainerStarted","Data":"8761873b074db20c12bac1a971f534b5645bfb6a99867d4c2ec5993d665bd747"} Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.312417 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" event={"ID":"a0f53229-487d-420f-9323-5ec48b97f717","Type":"ContainerStarted","Data":"eed77cbfa9538e096bf9e4d5cc5d75228a65f5f4243c6db217280d33704c81fe"} Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.312557 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" 
event={"ID":"a0f53229-487d-420f-9323-5ec48b97f717","Type":"ContainerStarted","Data":"fea520d7e597e8f8fe2a9cd89f4b24398925e512974ec390640229bf4afad3bc"} Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.313724 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.332766 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n76rp" event={"ID":"a15fb93f-eb63-4a8c-bec6-20bed7300dca","Type":"ContainerStarted","Data":"5864c129856b58705e0d3e0a6beff0038dd30713541b5234c5476070a3c082bf"} Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.333005 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n76rp" event={"ID":"a15fb93f-eb63-4a8c-bec6-20bed7300dca","Type":"ContainerStarted","Data":"d55b43d9765bc4ba0e8b48db3b856b58df357e10e005f24111f2f7427597e8da"} Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.340870 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" event={"ID":"729ee089-9af6-481a-bd47-e2a4105995bd","Type":"ContainerStarted","Data":"909fbb9e607b1637f02e76ef6ff813e722415175cc60c7bdef545d7b8bd4837c"} Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.340916 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" event={"ID":"729ee089-9af6-481a-bd47-e2a4105995bd","Type":"ContainerStarted","Data":"57cf033858bbb15409621e0859f21d6c6f7dab4de8a4cf94427bb694c0e57b8d"} Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.341494 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.345236 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" podStartSLOduration=14.345224374 podStartE2EDuration="14.345224374s" podCreationTimestamp="2026-01-23 11:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:53.34334785 +0000 UTC m=+197.512420076" watchObservedRunningTime="2026-01-23 11:55:53.345224374 +0000 UTC m=+197.514296600" Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.358224 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"801cf885-df69-4405-b33e-b31f8d4d7fdf","Type":"ContainerStarted","Data":"dfcb4656c079a79f3326d39a7fef68067cd5dae80468893437c9d2160b31670b"} Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.359672 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:55:53 crc kubenswrapper[4865]: E0123 11:55:53.359798 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-s5sbt" podUID="752a7b3b-7850-4bba-b8ce-be070452a538" Jan 23 11:55:53 crc kubenswrapper[4865]: E0123 11:55:53.360851 4865 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-f5nf9" podUID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" Jan 23 11:55:53 crc kubenswrapper[4865]: E0123 11:55:53.361424 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-h4zc4" podUID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" Jan 23 11:55:53 crc kubenswrapper[4865]: E0123 11:55:53.361543 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xvmlx" podUID="abe1c851-e34c-49e4-991b-cec1c55a25b6" Jan 23 11:55:53 crc kubenswrapper[4865]: E0123 11:55:53.362390 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-9w54z" podUID="778153be-8013-460c-8000-e58ba9f45cd9" Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.374185 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" podStartSLOduration=14.374170087 podStartE2EDuration="14.374170087s" podCreationTimestamp="2026-01-23 11:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:53.373391638 +0000 UTC m=+197.542463864" watchObservedRunningTime="2026-01-23 11:55:53.374170087 +0000 UTC m=+197.543242313" Jan 23 11:55:53 crc kubenswrapper[4865]: I0123 11:55:53.773527 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:55:54 crc kubenswrapper[4865]: I0123 11:55:54.126680 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5137707-0cd3-4f39-9d73-3401d315e827" path="/var/lib/kubelet/pods/d5137707-0cd3-4f39-9d73-3401d315e827/volumes" Jan 23 11:55:54 crc kubenswrapper[4865]: I0123 11:55:54.365205 4865 generic.go:334] "Generic (PLEG): container finished" podID="801cf885-df69-4405-b33e-b31f8d4d7fdf" containerID="148777f1e7df589b9bf08376f39ae35f5a64f6be5f2a74b15cd74567eed87613" exitCode=0 Jan 23 11:55:54 crc kubenswrapper[4865]: I0123 11:55:54.365273 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"801cf885-df69-4405-b33e-b31f8d4d7fdf","Type":"ContainerDied","Data":"148777f1e7df589b9bf08376f39ae35f5a64f6be5f2a74b15cd74567eed87613"} Jan 23 11:55:54 crc kubenswrapper[4865]: I0123 11:55:54.368632 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g2xj" event={"ID":"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec","Type":"ContainerStarted","Data":"8e7737ae16de3146d8c4ee9d55e25526a7897bc2af4e0c736fa876af9fc9283f"} Jan 23 11:55:54 crc kubenswrapper[4865]: I0123 11:55:54.371453 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"886599e1-f8ad-4ffd-9b2b-7db39dec28ee","Type":"ContainerStarted","Data":"27c7474ed12182ee2eb1a65ca6f595ad4fb920e3b5783b320b6589c58a0ab957"} Jan 23 11:55:54 crc kubenswrapper[4865]: I0123 11:55:54.373361 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-n76rp" event={"ID":"a15fb93f-eb63-4a8c-bec6-20bed7300dca","Type":"ContainerStarted","Data":"7a55e6f0d0ecb26fcec2600f236dcc1a8ae7a5f07834c00f021d2084198d9447"} Jan 23 11:55:54 crc kubenswrapper[4865]: I0123 11:55:54.429257 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-n76rp" podStartSLOduration=179.429237476 podStartE2EDuration="2m59.429237476s" podCreationTimestamp="2026-01-23 11:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:54.408029579 +0000 UTC m=+198.577101805" watchObservedRunningTime="2026-01-23 11:55:54.429237476 +0000 UTC m=+198.598309702" Jan 23 11:55:54 crc kubenswrapper[4865]: I0123 11:55:54.431028 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=5.431021519 podStartE2EDuration="5.431021519s" podCreationTimestamp="2026-01-23 11:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:55:54.426251235 +0000 UTC m=+198.595323471" watchObservedRunningTime="2026-01-23 11:55:54.431021519 +0000 UTC m=+198.600093745" Jan 23 11:55:54 crc kubenswrapper[4865]: I0123 11:55:54.455648 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5g2xj" podStartSLOduration=4.152495703 podStartE2EDuration="51.455630498s" podCreationTimestamp="2026-01-23 11:55:03 +0000 UTC" firstStartedPulling="2026-01-23 11:55:06.433500583 +0000 UTC m=+150.602572809" lastFinishedPulling="2026-01-23 11:55:53.736635378 +0000 UTC m=+197.905707604" observedRunningTime="2026-01-23 11:55:54.452154575 +0000 UTC m=+198.621226801" watchObservedRunningTime="2026-01-23 11:55:54.455630498 +0000 UTC m=+198.624702724" Jan 23 11:55:55 crc kubenswrapper[4865]: I0123 11:55:55.626478 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 11:55:55 crc kubenswrapper[4865]: I0123 11:55:55.716524 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/801cf885-df69-4405-b33e-b31f8d4d7fdf-kube-api-access\") pod \"801cf885-df69-4405-b33e-b31f8d4d7fdf\" (UID: \"801cf885-df69-4405-b33e-b31f8d4d7fdf\") " Jan 23 11:55:55 crc kubenswrapper[4865]: I0123 11:55:55.716620 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/801cf885-df69-4405-b33e-b31f8d4d7fdf-kubelet-dir\") pod \"801cf885-df69-4405-b33e-b31f8d4d7fdf\" (UID: \"801cf885-df69-4405-b33e-b31f8d4d7fdf\") " Jan 23 11:55:55 crc kubenswrapper[4865]: I0123 11:55:55.716942 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/801cf885-df69-4405-b33e-b31f8d4d7fdf-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "801cf885-df69-4405-b33e-b31f8d4d7fdf" (UID: "801cf885-df69-4405-b33e-b31f8d4d7fdf"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:55:55 crc kubenswrapper[4865]: I0123 11:55:55.722785 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801cf885-df69-4405-b33e-b31f8d4d7fdf-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "801cf885-df69-4405-b33e-b31f8d4d7fdf" (UID: "801cf885-df69-4405-b33e-b31f8d4d7fdf"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:55:55 crc kubenswrapper[4865]: I0123 11:55:55.818695 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/801cf885-df69-4405-b33e-b31f8d4d7fdf-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:55 crc kubenswrapper[4865]: I0123 11:55:55.818728 4865 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/801cf885-df69-4405-b33e-b31f8d4d7fdf-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 11:55:56 crc kubenswrapper[4865]: I0123 11:55:56.385543 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"801cf885-df69-4405-b33e-b31f8d4d7fdf","Type":"ContainerDied","Data":"dfcb4656c079a79f3326d39a7fef68067cd5dae80468893437c9d2160b31670b"} Jan 23 11:55:56 crc kubenswrapper[4865]: I0123 11:55:56.385591 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfcb4656c079a79f3326d39a7fef68067cd5dae80468893437c9d2160b31670b" Jan 23 11:55:56 crc kubenswrapper[4865]: I0123 11:55:56.385592 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 11:55:59 crc kubenswrapper[4865]: I0123 11:55:59.210545 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-655bbcfcff-459vj"] Jan 23 11:55:59 crc kubenswrapper[4865]: I0123 11:55:59.210816 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" podUID="729ee089-9af6-481a-bd47-e2a4105995bd" containerName="controller-manager" containerID="cri-o://909fbb9e607b1637f02e76ef6ff813e722415175cc60c7bdef545d7b8bd4837c" gracePeriod=30 Jan 23 11:55:59 crc kubenswrapper[4865]: I0123 11:55:59.226328 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb"] Jan 23 11:55:59 crc kubenswrapper[4865]: I0123 11:55:59.226879 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" podUID="a0f53229-487d-420f-9323-5ec48b97f717" containerName="route-controller-manager" containerID="cri-o://eed77cbfa9538e096bf9e4d5cc5d75228a65f5f4243c6db217280d33704c81fe" gracePeriod=30 Jan 23 11:56:00 crc kubenswrapper[4865]: I0123 11:56:00.584527 4865 patch_prober.go:28] interesting pod/controller-manager-655bbcfcff-459vj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 23 11:56:00 crc kubenswrapper[4865]: I0123 11:56:00.585134 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" 
podUID="729ee089-9af6-481a-bd47-e2a4105995bd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 23 11:56:02 crc kubenswrapper[4865]: I0123 11:56:02.589681 4865 patch_prober.go:28] interesting pod/route-controller-manager-795f4b9585-vg6fb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Jan 23 11:56:02 crc kubenswrapper[4865]: I0123 11:56:02.589766 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" podUID="a0f53229-487d-420f-9323-5ec48b97f717" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Jan 23 11:56:03 crc kubenswrapper[4865]: I0123 11:56:03.843228 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:56:03 crc kubenswrapper[4865]: I0123 11:56:03.843715 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:56:04 crc kubenswrapper[4865]: I0123 11:56:04.896672 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.306909 4865 generic.go:334] "Generic (PLEG): container finished" podID="a0f53229-487d-420f-9323-5ec48b97f717" containerID="eed77cbfa9538e096bf9e4d5cc5d75228a65f5f4243c6db217280d33704c81fe" exitCode=0 Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.307429 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" event={"ID":"a0f53229-487d-420f-9323-5ec48b97f717","Type":"ContainerDied","Data":"eed77cbfa9538e096bf9e4d5cc5d75228a65f5f4243c6db217280d33704c81fe"} Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.311324 4865 generic.go:334] "Generic (PLEG): container finished" podID="729ee089-9af6-481a-bd47-e2a4105995bd" containerID="909fbb9e607b1637f02e76ef6ff813e722415175cc60c7bdef545d7b8bd4837c" exitCode=0 Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.311361 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" event={"ID":"729ee089-9af6-481a-bd47-e2a4105995bd","Type":"ContainerDied","Data":"909fbb9e607b1637f02e76ef6ff813e722415175cc60c7bdef545d7b8bd4837c"} Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.372342 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.375771 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.400550 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6748474664-5fkzx"] Jan 23 11:56:05 crc kubenswrapper[4865]: E0123 11:56:05.400817 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729ee089-9af6-481a-bd47-e2a4105995bd" containerName="controller-manager" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.400837 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="729ee089-9af6-481a-bd47-e2a4105995bd" containerName="controller-manager" Jan 23 11:56:05 crc kubenswrapper[4865]: E0123 11:56:05.400854 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801cf885-df69-4405-b33e-b31f8d4d7fdf" containerName="pruner" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.400861 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="801cf885-df69-4405-b33e-b31f8d4d7fdf" containerName="pruner" Jan 23 11:56:05 crc kubenswrapper[4865]: E0123 11:56:05.400879 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0f53229-487d-420f-9323-5ec48b97f717" containerName="route-controller-manager" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.400886 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0f53229-487d-420f-9323-5ec48b97f717" containerName="route-controller-manager" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.400980 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="801cf885-df69-4405-b33e-b31f8d4d7fdf" containerName="pruner" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.400990 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0f53229-487d-420f-9323-5ec48b97f717" containerName="route-controller-manager" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.401002 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="729ee089-9af6-481a-bd47-e2a4105995bd" containerName="controller-manager" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.401402 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.411130 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6748474664-5fkzx"] Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.474760 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p74jl\" (UniqueName: \"kubernetes.io/projected/a0f53229-487d-420f-9323-5ec48b97f717-kube-api-access-p74jl\") pod \"a0f53229-487d-420f-9323-5ec48b97f717\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.474805 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0f53229-487d-420f-9323-5ec48b97f717-serving-cert\") pod \"a0f53229-487d-420f-9323-5ec48b97f717\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.474856 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-config\") pod \"729ee089-9af6-481a-bd47-e2a4105995bd\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.474886 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-client-ca\") pod \"729ee089-9af6-481a-bd47-e2a4105995bd\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.474937 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0f53229-487d-420f-9323-5ec48b97f717-client-ca\") pod \"a0f53229-487d-420f-9323-5ec48b97f717\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.474960 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/729ee089-9af6-481a-bd47-e2a4105995bd-serving-cert\") pod \"729ee089-9af6-481a-bd47-e2a4105995bd\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.474990 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-proxy-ca-bundles\") pod \"729ee089-9af6-481a-bd47-e2a4105995bd\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.475037 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh96l\" (UniqueName: \"kubernetes.io/projected/729ee089-9af6-481a-bd47-e2a4105995bd-kube-api-access-jh96l\") pod \"729ee089-9af6-481a-bd47-e2a4105995bd\" (UID: \"729ee089-9af6-481a-bd47-e2a4105995bd\") " Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.475063 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0f53229-487d-420f-9323-5ec48b97f717-config\") pod \"a0f53229-487d-420f-9323-5ec48b97f717\" (UID: \"a0f53229-487d-420f-9323-5ec48b97f717\") " Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.475214 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7czfx\" (UniqueName: \"kubernetes.io/projected/2d08dd61-4283-4b88-bb60-4c37c35e070a-kube-api-access-7czfx\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.475248 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-config\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.475279 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d08dd61-4283-4b88-bb60-4c37c35e070a-serving-cert\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.475327 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-client-ca\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.475356 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-proxy-ca-bundles\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.475744 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-client-ca" (OuterVolumeSpecName: "client-ca") pod "729ee089-9af6-481a-bd47-e2a4105995bd" (UID: "729ee089-9af6-481a-bd47-e2a4105995bd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.475923 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f53229-487d-420f-9323-5ec48b97f717-config" (OuterVolumeSpecName: "config") pod "a0f53229-487d-420f-9323-5ec48b97f717" (UID: "a0f53229-487d-420f-9323-5ec48b97f717"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.476246 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "729ee089-9af6-481a-bd47-e2a4105995bd" (UID: "729ee089-9af6-481a-bd47-e2a4105995bd"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.476278 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0f53229-487d-420f-9323-5ec48b97f717-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0f53229-487d-420f-9323-5ec48b97f717" (UID: "a0f53229-487d-420f-9323-5ec48b97f717"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.476330 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-config" (OuterVolumeSpecName: "config") pod "729ee089-9af6-481a-bd47-e2a4105995bd" (UID: "729ee089-9af6-481a-bd47-e2a4105995bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.480875 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0f53229-487d-420f-9323-5ec48b97f717-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0f53229-487d-420f-9323-5ec48b97f717" (UID: "a0f53229-487d-420f-9323-5ec48b97f717"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.481431 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/729ee089-9af6-481a-bd47-e2a4105995bd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "729ee089-9af6-481a-bd47-e2a4105995bd" (UID: "729ee089-9af6-481a-bd47-e2a4105995bd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.488056 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729ee089-9af6-481a-bd47-e2a4105995bd-kube-api-access-jh96l" (OuterVolumeSpecName: "kube-api-access-jh96l") pod "729ee089-9af6-481a-bd47-e2a4105995bd" (UID: "729ee089-9af6-481a-bd47-e2a4105995bd"). InnerVolumeSpecName "kube-api-access-jh96l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.495031 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0f53229-487d-420f-9323-5ec48b97f717-kube-api-access-p74jl" (OuterVolumeSpecName: "kube-api-access-p74jl") pod "a0f53229-487d-420f-9323-5ec48b97f717" (UID: "a0f53229-487d-420f-9323-5ec48b97f717"). InnerVolumeSpecName "kube-api-access-p74jl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.576327 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-client-ca\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.576820 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-proxy-ca-bundles\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.576856 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7czfx\" (UniqueName: \"kubernetes.io/projected/2d08dd61-4283-4b88-bb60-4c37c35e070a-kube-api-access-7czfx\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.576909 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-config\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.576933 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d08dd61-4283-4b88-bb60-4c37c35e070a-serving-cert\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.576984 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.577005 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0f53229-487d-420f-9323-5ec48b97f717-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.577015 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/729ee089-9af6-481a-bd47-e2a4105995bd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.577027 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.577047 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh96l\" (UniqueName: \"kubernetes.io/projected/729ee089-9af6-481a-bd47-e2a4105995bd-kube-api-access-jh96l\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.577058 4865 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0f53229-487d-420f-9323-5ec48b97f717-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.577068 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p74jl\" (UniqueName: \"kubernetes.io/projected/a0f53229-487d-420f-9323-5ec48b97f717-kube-api-access-p74jl\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.577085 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0f53229-487d-420f-9323-5ec48b97f717-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.577094 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729ee089-9af6-481a-bd47-e2a4105995bd-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.578979 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-client-ca\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.579777 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-proxy-ca-bundles\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.581119 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-config\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.588764 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d08dd61-4283-4b88-bb60-4c37c35e070a-serving-cert\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.600934 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7czfx\" (UniqueName: \"kubernetes.io/projected/2d08dd61-4283-4b88-bb60-4c37c35e070a-kube-api-access-7czfx\") pod \"controller-manager-6748474664-5fkzx\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:05 crc kubenswrapper[4865]: I0123 11:56:05.716887 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.112572 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6748474664-5fkzx"] Jan 23 11:56:06 crc kubenswrapper[4865]: W0123 11:56:06.141740 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d08dd61_4283_4b88_bb60_4c37c35e070a.slice/crio-90aaf6217ace04e740cf5ced4acc109561e6566553b2df509a758e9f411878df WatchSource:0}: Error finding container 90aaf6217ace04e740cf5ced4acc109561e6566553b2df509a758e9f411878df: Status 404 returned error can't find the container with id 90aaf6217ace04e740cf5ced4acc109561e6566553b2df509a758e9f411878df Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.319994 4865 generic.go:334] "Generic (PLEG): container finished" podID="8f13b08e-48a9-423c-ae8c-d1b13239074d" containerID="3dba2e3ecae649b301ef84aeb8fc87aaedb4147fcaa0c53e0c504d233de47bf1" exitCode=0 Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.320047 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gzh6l" event={"ID":"8f13b08e-48a9-423c-ae8c-d1b13239074d","Type":"ContainerDied","Data":"3dba2e3ecae649b301ef84aeb8fc87aaedb4147fcaa0c53e0c504d233de47bf1"} Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.327000 4865 generic.go:334] "Generic (PLEG): container finished" podID="752a7b3b-7850-4bba-b8ce-be070452a538" containerID="8c0cb241176adafecb3f27f29527d8d013bce63cff9c66509daa4d1e4d689bdf" exitCode=0 Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.327070 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5sbt" event={"ID":"752a7b3b-7850-4bba-b8ce-be070452a538","Type":"ContainerDied","Data":"8c0cb241176adafecb3f27f29527d8d013bce63cff9c66509daa4d1e4d689bdf"} Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.333727 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvmlx" event={"ID":"abe1c851-e34c-49e4-991b-cec1c55a25b6","Type":"ContainerStarted","Data":"65438d8bb6b1cae701668ef39ea835d20c87c4686de82a798af131dfa01ce3b0"} Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.337667 4865 generic.go:334] "Generic (PLEG): container finished" podID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" containerID="554582067d71378a91ae9cc7bb5ce6a913670ce60a969435a56fee72a580ff7e" exitCode=0 Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.337803 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l76kv" event={"ID":"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9","Type":"ContainerDied","Data":"554582067d71378a91ae9cc7bb5ce6a913670ce60a969435a56fee72a580ff7e"} Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.348218 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" event={"ID":"a0f53229-487d-420f-9323-5ec48b97f717","Type":"ContainerDied","Data":"fea520d7e597e8f8fe2a9cd89f4b24398925e512974ec390640229bf4afad3bc"} Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.348358 4865 scope.go:117] "RemoveContainer" containerID="eed77cbfa9538e096bf9e4d5cc5d75228a65f5f4243c6db217280d33704c81fe" Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.348504 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb" Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.351014 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" event={"ID":"2d08dd61-4283-4b88-bb60-4c37c35e070a","Type":"ContainerStarted","Data":"b272ae6a4fc81a3df3c441192e2a34271025083af39b615f644c1f7a939f0f79"} Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.351038 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" event={"ID":"2d08dd61-4283-4b88-bb60-4c37c35e070a","Type":"ContainerStarted","Data":"90aaf6217ace04e740cf5ced4acc109561e6566553b2df509a758e9f411878df"} Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.351704 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.353719 4865 patch_prober.go:28] interesting pod/controller-manager-6748474664-5fkzx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.353757 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" podUID="2d08dd61-4283-4b88-bb60-4c37c35e070a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.354203 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" event={"ID":"729ee089-9af6-481a-bd47-e2a4105995bd","Type":"ContainerDied","Data":"57cf033858bbb15409621e0859f21d6c6f7dab4de8a4cf94427bb694c0e57b8d"} Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.354333 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-655bbcfcff-459vj" Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.391044 4865 scope.go:117] "RemoveContainer" containerID="909fbb9e607b1637f02e76ef6ff813e722415175cc60c7bdef545d7b8bd4837c" Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.411451 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb"] Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.416337 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-795f4b9585-vg6fb"] Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.417394 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.451974 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" podStartSLOduration=7.451953632 podStartE2EDuration="7.451953632s" podCreationTimestamp="2026-01-23 11:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:56:06.431208911 +0000 UTC m=+210.600281137" watchObservedRunningTime="2026-01-23 11:56:06.451953632 +0000 UTC m=+210.621025858" Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.453152 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-655bbcfcff-459vj"] Jan 23 11:56:06 crc kubenswrapper[4865]: I0123 11:56:06.458822 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-655bbcfcff-459vj"] Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.150850 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g2xj"] Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.368837 4865 generic.go:334] "Generic (PLEG): container finished" podID="abe1c851-e34c-49e4-991b-cec1c55a25b6" containerID="65438d8bb6b1cae701668ef39ea835d20c87c4686de82a798af131dfa01ce3b0" exitCode=0 Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.368934 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvmlx" event={"ID":"abe1c851-e34c-49e4-991b-cec1c55a25b6","Type":"ContainerDied","Data":"65438d8bb6b1cae701668ef39ea835d20c87c4686de82a798af131dfa01ce3b0"} Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.379768 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.684246 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng"] Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.685078 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.687564 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.687920 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.688677 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.688822 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.689760 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.691329 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.704268 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng"] Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.813275 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa64ba84-e782-4319-8cba-d478e214e200-config\") pod \"route-controller-manager-65cd57845b-6gnng\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.813372 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxll7\" (UniqueName: \"kubernetes.io/projected/aa64ba84-e782-4319-8cba-d478e214e200-kube-api-access-sxll7\") pod \"route-controller-manager-65cd57845b-6gnng\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.813409 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa64ba84-e782-4319-8cba-d478e214e200-client-ca\") pod \"route-controller-manager-65cd57845b-6gnng\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.813522 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa64ba84-e782-4319-8cba-d478e214e200-serving-cert\") pod \"route-controller-manager-65cd57845b-6gnng\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.914982 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa64ba84-e782-4319-8cba-d478e214e200-config\") pod 
\"route-controller-manager-65cd57845b-6gnng\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.915038 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxll7\" (UniqueName: \"kubernetes.io/projected/aa64ba84-e782-4319-8cba-d478e214e200-kube-api-access-sxll7\") pod \"route-controller-manager-65cd57845b-6gnng\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.915083 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa64ba84-e782-4319-8cba-d478e214e200-client-ca\") pod \"route-controller-manager-65cd57845b-6gnng\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.915120 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa64ba84-e782-4319-8cba-d478e214e200-serving-cert\") pod \"route-controller-manager-65cd57845b-6gnng\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.917002 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa64ba84-e782-4319-8cba-d478e214e200-config\") pod \"route-controller-manager-65cd57845b-6gnng\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.917816 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa64ba84-e782-4319-8cba-d478e214e200-client-ca\") pod \"route-controller-manager-65cd57845b-6gnng\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.924716 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa64ba84-e782-4319-8cba-d478e214e200-serving-cert\") pod \"route-controller-manager-65cd57845b-6gnng\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:07 crc kubenswrapper[4865]: I0123 11:56:07.942297 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxll7\" (UniqueName: \"kubernetes.io/projected/aa64ba84-e782-4319-8cba-d478e214e200-kube-api-access-sxll7\") pod \"route-controller-manager-65cd57845b-6gnng\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:08 crc kubenswrapper[4865]: I0123 11:56:08.001251 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:08 crc kubenswrapper[4865]: I0123 11:56:08.133537 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729ee089-9af6-481a-bd47-e2a4105995bd" path="/var/lib/kubelet/pods/729ee089-9af6-481a-bd47-e2a4105995bd/volumes" Jan 23 11:56:08 crc kubenswrapper[4865]: I0123 11:56:08.138911 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0f53229-487d-420f-9323-5ec48b97f717" path="/var/lib/kubelet/pods/a0f53229-487d-420f-9323-5ec48b97f717/volumes" Jan 23 11:56:08 crc kubenswrapper[4865]: I0123 11:56:08.382810 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5g2xj" podUID="0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" containerName="registry-server" containerID="cri-o://8e7737ae16de3146d8c4ee9d55e25526a7897bc2af4e0c736fa876af9fc9283f" gracePeriod=2 Jan 23 11:56:08 crc kubenswrapper[4865]: I0123 11:56:08.449826 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng"] Jan 23 11:56:09 crc kubenswrapper[4865]: I0123 11:56:09.389939 4865 generic.go:334] "Generic (PLEG): container finished" podID="0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" containerID="8e7737ae16de3146d8c4ee9d55e25526a7897bc2af4e0c736fa876af9fc9283f" exitCode=0 Jan 23 11:56:09 crc kubenswrapper[4865]: I0123 11:56:09.390010 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g2xj" event={"ID":"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec","Type":"ContainerDied","Data":"8e7737ae16de3146d8c4ee9d55e25526a7897bc2af4e0c736fa876af9fc9283f"} Jan 23 11:56:09 crc kubenswrapper[4865]: I0123 11:56:09.392346 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" event={"ID":"aa64ba84-e782-4319-8cba-d478e214e200","Type":"ContainerStarted","Data":"5abe419e5441b4d1192c98148ed978fbece8cae38c83d27035e8cd18c27f1f5c"} Jan 23 11:56:09 crc kubenswrapper[4865]: I0123 11:56:09.392389 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" event={"ID":"aa64ba84-e782-4319-8cba-d478e214e200","Type":"ContainerStarted","Data":"0030a8f85659948aab204898419c3694b08160c7732c0287240c0fb1c36c40cf"} Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.048338 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.121806 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" podUID="67fa6cdb-c380-4b05-a05d-9df4a4b86019" containerName="oauth-openshift" containerID="cri-o://53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81" gracePeriod=15 Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.197128 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-catalog-content\") pod \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\" (UID: \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\") " Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.197205 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v6fc\" (UniqueName: \"kubernetes.io/projected/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-kube-api-access-9v6fc\") pod \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\" (UID: \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\") " Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.197246 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-utilities\") pod \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\" (UID: \"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec\") " Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.198414 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-utilities" (OuterVolumeSpecName: "utilities") pod "0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" (UID: "0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.206021 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-kube-api-access-9v6fc" (OuterVolumeSpecName: "kube-api-access-9v6fc") pod "0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" (UID: "0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec"). InnerVolumeSpecName "kube-api-access-9v6fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.225904 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" (UID: "0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.299288 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.299330 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v6fc\" (UniqueName: \"kubernetes.io/projected/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-kube-api-access-9v6fc\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.299347 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.398709 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5g2xj" event={"ID":"0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec","Type":"ContainerDied","Data":"6bdc3633096dbf8f1c178eef2282e3d6e04e8905c5f5bb5d184241bd32f3730f"} Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.398766 4865 scope.go:117] "RemoveContainer" containerID="8e7737ae16de3146d8c4ee9d55e25526a7897bc2af4e0c736fa876af9fc9283f" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.398893 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5g2xj" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.408938 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gzh6l" event={"ID":"8f13b08e-48a9-423c-ae8c-d1b13239074d","Type":"ContainerStarted","Data":"a494e419553bb9e338dd8fbcd3b6faeb864316e9fb8416c77890d81a0a6a20b0"} Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.423660 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5sbt" event={"ID":"752a7b3b-7850-4bba-b8ce-be070452a538","Type":"ContainerStarted","Data":"86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919"} Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.427519 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvmlx" event={"ID":"abe1c851-e34c-49e4-991b-cec1c55a25b6","Type":"ContainerStarted","Data":"c24f78a98d4c5f878ef8d78443f41933f409a879cd25c94a745043c88df9f97f"} Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.431702 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zc4" event={"ID":"777ae5a8-8d44-4d0d-a598-d1782fcc9585","Type":"ContainerStarted","Data":"9bbe6b60f6a38f41d185c32f339a8e81af4a99ffaf00d38f4fbf29bcd862d849"} Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.431781 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.436959 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.443231 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gzh6l" podStartSLOduration=3.907779369 podStartE2EDuration="1m9.443214257s" 
podCreationTimestamp="2026-01-23 11:55:01 +0000 UTC" firstStartedPulling="2026-01-23 11:55:03.991522986 +0000 UTC m=+148.160595212" lastFinishedPulling="2026-01-23 11:56:09.526957874 +0000 UTC m=+213.696030100" observedRunningTime="2026-01-23 11:56:10.440717322 +0000 UTC m=+214.609789558" watchObservedRunningTime="2026-01-23 11:56:10.443214257 +0000 UTC m=+214.612286493" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.468689 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" podStartSLOduration=11.46864096 podStartE2EDuration="11.46864096s" podCreationTimestamp="2026-01-23 11:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:56:10.466052912 +0000 UTC m=+214.635125138" watchObservedRunningTime="2026-01-23 11:56:10.46864096 +0000 UTC m=+214.637713226" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.515778 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s5sbt" podStartSLOduration=5.034447878 podStartE2EDuration="1m9.515757597s" podCreationTimestamp="2026-01-23 11:55:01 +0000 UTC" firstStartedPulling="2026-01-23 11:55:05.203360916 +0000 UTC m=+149.372433142" lastFinishedPulling="2026-01-23 11:56:09.684670635 +0000 UTC m=+213.853742861" observedRunningTime="2026-01-23 11:56:10.502981664 +0000 UTC m=+214.672053890" watchObservedRunningTime="2026-01-23 11:56:10.515757597 +0000 UTC m=+214.684829823" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.517954 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g2xj"] Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.522194 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5g2xj"] Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.523037 4865 scope.go:117] "RemoveContainer" containerID="245c2190d4552b0978a570b1ef4495204fa712e61f572f3bf405cf415bebddab" Jan 23 11:56:10 crc kubenswrapper[4865]: I0123 11:56:10.548336 4865 scope.go:117] "RemoveContainer" containerID="6160ccb531d53df90e88749b3557d63e1d1ded7a7c27bcf59aecd158a8bba92e" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.241524 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.318876 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-ocp-branding-template\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.318919 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-router-certs\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.318946 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-idp-0-file-data\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.318978 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-serving-cert\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.319007 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-trusted-ca-bundle\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.319251 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67fa6cdb-c380-4b05-a05d-9df4a4b86019-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.319871 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.319031 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/67fa6cdb-c380-4b05-a05d-9df4a4b86019-audit-dir\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.319977 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-login\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.321102 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-session\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.321151 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-cliconfig\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.321172 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-error\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.321194 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-audit-policies\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.321220 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-service-ca\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.321252 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xnfn\" (UniqueName: \"kubernetes.io/projected/67fa6cdb-c380-4b05-a05d-9df4a4b86019-kube-api-access-7xnfn\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.321275 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-provider-selection\") pod \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\" (UID: \"67fa6cdb-c380-4b05-a05d-9df4a4b86019\") " Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.321438 4865 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.321452 4865 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/67fa6cdb-c380-4b05-a05d-9df4a4b86019-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.325248 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.331300 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.332507 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.332944 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.335324 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.337216 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.338479 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.339127 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.339904 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67fa6cdb-c380-4b05-a05d-9df4a4b86019-kube-api-access-7xnfn" (OuterVolumeSpecName: "kube-api-access-7xnfn") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "kube-api-access-7xnfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.347002 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.347567 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.348017 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "67fa6cdb-c380-4b05-a05d-9df4a4b86019" (UID: "67fa6cdb-c380-4b05-a05d-9df4a4b86019"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.423442 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.423488 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.423500 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.423511 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.423522 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.423542 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.423556 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.423573 4865 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.423584 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.423611 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xnfn\" (UniqueName: \"kubernetes.io/projected/67fa6cdb-c380-4b05-a05d-9df4a4b86019-kube-api-access-7xnfn\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.423624 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.423637 4865 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/67fa6cdb-c380-4b05-a05d-9df4a4b86019-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.440129 4865 generic.go:334] "Generic (PLEG): container finished" podID="67fa6cdb-c380-4b05-a05d-9df4a4b86019" containerID="53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81" exitCode=0 Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.440267 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.440979 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" event={"ID":"67fa6cdb-c380-4b05-a05d-9df4a4b86019","Type":"ContainerDied","Data":"53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81"} Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.441049 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gk4fh" event={"ID":"67fa6cdb-c380-4b05-a05d-9df4a4b86019","Type":"ContainerDied","Data":"14cb46d8e83ead72545c309bf2015bc390250d8b319ee04e8ea75d6df879f032"} Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.441076 4865 scope.go:117] "RemoveContainer" containerID="53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.448431 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l76kv" event={"ID":"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9","Type":"ContainerStarted","Data":"a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991"} Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.456095 4865 generic.go:334] "Generic (PLEG): container finished" podID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" containerID="9bbe6b60f6a38f41d185c32f339a8e81af4a99ffaf00d38f4fbf29bcd862d849" exitCode=0 Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.456168 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zc4" event={"ID":"777ae5a8-8d44-4d0d-a598-d1782fcc9585","Type":"ContainerDied","Data":"9bbe6b60f6a38f41d185c32f339a8e81af4a99ffaf00d38f4fbf29bcd862d849"} Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.472387 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l76kv" podStartSLOduration=3.95310319 podStartE2EDuration="1m7.472358782s" podCreationTimestamp="2026-01-23 11:55:04 +0000 UTC" firstStartedPulling="2026-01-23 11:55:06.332258092 +0000 UTC m=+150.501330318" lastFinishedPulling="2026-01-23 11:56:09.851513684 +0000 UTC m=+214.020585910" observedRunningTime="2026-01-23 11:56:11.471439778 +0000 UTC m=+215.640512004" watchObservedRunningTime="2026-01-23 11:56:11.472358782 +0000 UTC m=+215.641431008" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.495726 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xvmlx" podStartSLOduration=5.265605991 podStartE2EDuration="1m7.49571077s" podCreationTimestamp="2026-01-23 11:55:04 +0000 UTC" firstStartedPulling="2026-01-23 11:55:07.493369378 +0000 UTC m=+151.662441604" lastFinishedPulling="2026-01-23 11:56:09.723474157 +0000 UTC m=+213.892546383" observedRunningTime="2026-01-23 11:56:11.494568741 +0000 UTC m=+215.663640967" watchObservedRunningTime="2026-01-23 
11:56:11.49571077 +0000 UTC m=+215.664782996" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.531490 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gk4fh"] Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.534761 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gk4fh"] Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.859215 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:56:11 crc kubenswrapper[4865]: I0123 11:56:11.861577 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:56:12 crc kubenswrapper[4865]: I0123 11:56:12.125707 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" path="/var/lib/kubelet/pods/0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec/volumes" Jan 23 11:56:12 crc kubenswrapper[4865]: I0123 11:56:12.126412 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67fa6cdb-c380-4b05-a05d-9df4a4b86019" path="/var/lib/kubelet/pods/67fa6cdb-c380-4b05-a05d-9df4a4b86019/volumes" Jan 23 11:56:12 crc kubenswrapper[4865]: I0123 11:56:12.378270 4865 scope.go:117] "RemoveContainer" containerID="53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81" Jan 23 11:56:12 crc kubenswrapper[4865]: E0123 11:56:12.379107 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81\": container with ID starting with 53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81 not found: ID does not exist" containerID="53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81" Jan 23 11:56:12 crc kubenswrapper[4865]: I0123 11:56:12.379172 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81"} err="failed to get container status \"53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81\": rpc error: code = NotFound desc = could not find container \"53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81\": container with ID starting with 53175871fe784cd655dbd156611e06da92db001afe912bd2e266e926c1abde81 not found: ID does not exist" Jan 23 11:56:12 crc kubenswrapper[4865]: I0123 11:56:12.593734 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:56:12 crc kubenswrapper[4865]: I0123 11:56:12.594340 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:56:12 crc kubenswrapper[4865]: I0123 11:56:12.640218 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:56:12 crc kubenswrapper[4865]: I0123 11:56:12.901261 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gzh6l" podUID="8f13b08e-48a9-423c-ae8c-d1b13239074d" containerName="registry-server" probeResult="failure" output=< Jan 23 11:56:12 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 11:56:12 crc kubenswrapper[4865]: > Jan 23 11:56:13 crc 
kubenswrapper[4865]: I0123 11:56:13.689451 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7489ccbc46-6gcbp"] Jan 23 11:56:13 crc kubenswrapper[4865]: E0123 11:56:13.689773 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" containerName="extract-content" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.689793 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" containerName="extract-content" Jan 23 11:56:13 crc kubenswrapper[4865]: E0123 11:56:13.689809 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67fa6cdb-c380-4b05-a05d-9df4a4b86019" containerName="oauth-openshift" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.689817 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="67fa6cdb-c380-4b05-a05d-9df4a4b86019" containerName="oauth-openshift" Jan 23 11:56:13 crc kubenswrapper[4865]: E0123 11:56:13.689835 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" containerName="registry-server" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.689843 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" containerName="registry-server" Jan 23 11:56:13 crc kubenswrapper[4865]: E0123 11:56:13.689857 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" containerName="extract-utilities" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.689864 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" containerName="extract-utilities" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.690005 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ee30e59-43d0-46ad-88d0-fcafd1d3e7ec" containerName="registry-server" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.690023 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="67fa6cdb-c380-4b05-a05d-9df4a4b86019" containerName="oauth-openshift" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.690567 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.696225 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.696829 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.696932 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.697052 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.697103 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.697294 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.697344 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.697649 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.697804 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.698020 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.698242 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.700745 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.715710 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.716515 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7489ccbc46-6gcbp"] Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.724917 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.740509 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862024 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-session\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " 
pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862088 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862123 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862154 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-user-template-login\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862193 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862224 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-user-template-error\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862256 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a51b0d26-bdc8-433f-90e5-d90b9bd94373-audit-policies\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862312 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862341 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/a51b0d26-bdc8-433f-90e5-d90b9bd94373-audit-dir\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862414 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-router-certs\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862493 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-service-ca\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862527 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862555 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.862582 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6dj\" (UniqueName: \"kubernetes.io/projected/a51b0d26-bdc8-433f-90e5-d90b9bd94373-kube-api-access-8p6dj\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963178 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963226 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963253 4865 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-user-template-login\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963286 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963310 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-user-template-error\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963334 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a51b0d26-bdc8-433f-90e5-d90b9bd94373-audit-policies\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963366 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963393 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a51b0d26-bdc8-433f-90e5-d90b9bd94373-audit-dir\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963417 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-router-certs\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963447 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-service-ca\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963474 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963499 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963523 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6dj\" (UniqueName: \"kubernetes.io/projected/a51b0d26-bdc8-433f-90e5-d90b9bd94373-kube-api-access-8p6dj\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.963556 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-session\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.965172 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-service-ca\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.965787 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a51b0d26-bdc8-433f-90e5-d90b9bd94373-audit-policies\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.966045 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.968273 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.968296 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-user-template-error\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.968423 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-session\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.968482 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a51b0d26-bdc8-433f-90e5-d90b9bd94373-audit-dir\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.968795 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.969974 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.970243 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-router-certs\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.973161 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.989253 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-user-template-login\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.990263 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/a51b0d26-bdc8-433f-90e5-d90b9bd94373-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:13 crc kubenswrapper[4865]: I0123 11:56:13.992631 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6dj\" (UniqueName: \"kubernetes.io/projected/a51b0d26-bdc8-433f-90e5-d90b9bd94373-kube-api-access-8p6dj\") pod \"oauth-openshift-7489ccbc46-6gcbp\" (UID: \"a51b0d26-bdc8-433f-90e5-d90b9bd94373\") " pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:14 crc kubenswrapper[4865]: I0123 11:56:14.008617 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:14 crc kubenswrapper[4865]: I0123 11:56:14.823017 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:56:14 crc kubenswrapper[4865]: I0123 11:56:14.823100 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:56:15 crc kubenswrapper[4865]: I0123 11:56:15.150140 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:56:15 crc kubenswrapper[4865]: I0123 11:56:15.150501 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:56:15 crc kubenswrapper[4865]: I0123 11:56:15.876477 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l76kv" podUID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" containerName="registry-server" probeResult="failure" output=< Jan 23 11:56:15 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 11:56:15 crc kubenswrapper[4865]: > Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.199794 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xvmlx" podUID="abe1c851-e34c-49e4-991b-cec1c55a25b6" containerName="registry-server" probeResult="failure" output=< Jan 23 11:56:16 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 11:56:16 crc kubenswrapper[4865]: > Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.226254 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7489ccbc46-6gcbp"] Jan 23 11:56:16 crc kubenswrapper[4865]: W0123 11:56:16.233557 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda51b0d26_bdc8_433f_90e5_d90b9bd94373.slice/crio-14b28dc698e8ea8eea0f007ae69abaf5451909df4fc0a61a2c076929a7e0de8a WatchSource:0}: Error finding container 14b28dc698e8ea8eea0f007ae69abaf5451909df4fc0a61a2c076929a7e0de8a: Status 404 returned error can't find the container with id 14b28dc698e8ea8eea0f007ae69abaf5451909df4fc0a61a2c076929a7e0de8a Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.490692 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" containerID="7cabe49b196dd125b6536db907bdc2de9ac941093e5ff77bbc8092a9210e0be9" exitCode=0 Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.491086 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-f5nf9" event={"ID":"2d701fdb-266c-4e83-a0b6-099bfd0987a9","Type":"ContainerDied","Data":"7cabe49b196dd125b6536db907bdc2de9ac941093e5ff77bbc8092a9210e0be9"} Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.495903 4865 generic.go:334] "Generic (PLEG): container finished" podID="778153be-8013-460c-8000-e58ba9f45cd9" containerID="7aeda3c14276640a25dc59f73206800dfbcbc3e50534dde6fcc2b7adf1d8ad23" exitCode=0 Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.496080 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9w54z" event={"ID":"778153be-8013-460c-8000-e58ba9f45cd9","Type":"ContainerDied","Data":"7aeda3c14276640a25dc59f73206800dfbcbc3e50534dde6fcc2b7adf1d8ad23"} Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.514572 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zc4" event={"ID":"777ae5a8-8d44-4d0d-a598-d1782fcc9585","Type":"ContainerStarted","Data":"ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a"} Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.523682 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" event={"ID":"a51b0d26-bdc8-433f-90e5-d90b9bd94373","Type":"ContainerStarted","Data":"14b28dc698e8ea8eea0f007ae69abaf5451909df4fc0a61a2c076929a7e0de8a"} Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.524789 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.526007 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" start-of-body= Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.526062 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.618040 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h4zc4" podStartSLOduration=5.122597707 podStartE2EDuration="1m15.618014197s" podCreationTimestamp="2026-01-23 11:55:01 +0000 UTC" firstStartedPulling="2026-01-23 11:55:05.203057178 +0000 UTC m=+149.372129394" lastFinishedPulling="2026-01-23 11:56:15.698473658 +0000 UTC m=+219.867545884" observedRunningTime="2026-01-23 11:56:16.617063112 +0000 UTC m=+220.786135348" watchObservedRunningTime="2026-01-23 11:56:16.618014197 +0000 UTC m=+220.787086443" Jan 23 11:56:16 crc kubenswrapper[4865]: I0123 11:56:16.619196 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podStartSLOduration=31.619187947 podStartE2EDuration="31.619187947s" podCreationTimestamp="2026-01-23 11:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:56:16.584698458 +0000 UTC m=+220.753770714" watchObservedRunningTime="2026-01-23 
11:56:16.619187947 +0000 UTC m=+220.788260183" Jan 23 11:56:17 crc kubenswrapper[4865]: I0123 11:56:17.534641 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9w54z" event={"ID":"778153be-8013-460c-8000-e58ba9f45cd9","Type":"ContainerStarted","Data":"5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b"} Jan 23 11:56:17 crc kubenswrapper[4865]: I0123 11:56:17.536717 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" event={"ID":"a51b0d26-bdc8-433f-90e5-d90b9bd94373","Type":"ContainerStarted","Data":"6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82"} Jan 23 11:56:17 crc kubenswrapper[4865]: I0123 11:56:17.542199 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nf9" event={"ID":"2d701fdb-266c-4e83-a0b6-099bfd0987a9","Type":"ContainerStarted","Data":"ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510"} Jan 23 11:56:17 crc kubenswrapper[4865]: I0123 11:56:17.544551 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 11:56:17 crc kubenswrapper[4865]: I0123 11:56:17.555152 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9w54z" podStartSLOduration=2.498160725 podStartE2EDuration="1m14.555140263s" podCreationTimestamp="2026-01-23 11:55:03 +0000 UTC" firstStartedPulling="2026-01-23 11:55:05.145195195 +0000 UTC m=+149.314267421" lastFinishedPulling="2026-01-23 11:56:17.202174733 +0000 UTC m=+221.371246959" observedRunningTime="2026-01-23 11:56:17.5538887 +0000 UTC m=+221.722960926" watchObservedRunningTime="2026-01-23 11:56:17.555140263 +0000 UTC m=+221.724212489" Jan 23 11:56:17 crc kubenswrapper[4865]: I0123 11:56:17.576047 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f5nf9" podStartSLOduration=3.674551979 podStartE2EDuration="1m16.576019287s" podCreationTimestamp="2026-01-23 11:55:01 +0000 UTC" firstStartedPulling="2026-01-23 11:55:04.029262629 +0000 UTC m=+148.198334855" lastFinishedPulling="2026-01-23 11:56:16.930729937 +0000 UTC m=+221.099802163" observedRunningTime="2026-01-23 11:56:17.570266778 +0000 UTC m=+221.739339004" watchObservedRunningTime="2026-01-23 11:56:17.576019287 +0000 UTC m=+221.745091513" Jan 23 11:56:18 crc kubenswrapper[4865]: I0123 11:56:18.776543 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 11:56:18 crc kubenswrapper[4865]: I0123 11:56:18.776947 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 11:56:18 crc kubenswrapper[4865]: I0123 11:56:18.777003 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:56:18 crc kubenswrapper[4865]: I0123 11:56:18.777634 4865 kuberuntime_manager.go:1027] "Message 
for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 11:56:18 crc kubenswrapper[4865]: I0123 11:56:18.777684 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9" gracePeriod=600 Jan 23 11:56:19 crc kubenswrapper[4865]: I0123 11:56:19.074694 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6748474664-5fkzx"] Jan 23 11:56:19 crc kubenswrapper[4865]: I0123 11:56:19.074943 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" podUID="2d08dd61-4283-4b88-bb60-4c37c35e070a" containerName="controller-manager" containerID="cri-o://b272ae6a4fc81a3df3c441192e2a34271025083af39b615f644c1f7a939f0f79" gracePeriod=30 Jan 23 11:56:19 crc kubenswrapper[4865]: I0123 11:56:19.159531 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng"] Jan 23 11:56:19 crc kubenswrapper[4865]: I0123 11:56:19.159857 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" podUID="aa64ba84-e782-4319-8cba-d478e214e200" containerName="route-controller-manager" containerID="cri-o://5abe419e5441b4d1192c98148ed978fbece8cae38c83d27035e8cd18c27f1f5c" gracePeriod=30 Jan 23 11:56:21 crc kubenswrapper[4865]: I0123 11:56:21.569142 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9" exitCode=0 Jan 23 11:56:21 crc kubenswrapper[4865]: I0123 11:56:21.569248 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9"} Jan 23 11:56:21 crc kubenswrapper[4865]: I0123 11:56:21.573152 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d08dd61-4283-4b88-bb60-4c37c35e070a" containerID="b272ae6a4fc81a3df3c441192e2a34271025083af39b615f644c1f7a939f0f79" exitCode=0 Jan 23 11:56:21 crc kubenswrapper[4865]: I0123 11:56:21.573215 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" event={"ID":"2d08dd61-4283-4b88-bb60-4c37c35e070a","Type":"ContainerDied","Data":"b272ae6a4fc81a3df3c441192e2a34271025083af39b615f644c1f7a939f0f79"} Jan 23 11:56:21 crc kubenswrapper[4865]: I0123 11:56:21.940227 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:56:21 crc kubenswrapper[4865]: I0123 11:56:21.995345 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:56:22 crc kubenswrapper[4865]: I0123 11:56:22.010780 4865 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:56:22 crc kubenswrapper[4865]: I0123 11:56:22.010847 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:56:22 crc kubenswrapper[4865]: I0123 11:56:22.073150 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:56:22 crc kubenswrapper[4865]: I0123 11:56:22.581263 4865 generic.go:334] "Generic (PLEG): container finished" podID="aa64ba84-e782-4319-8cba-d478e214e200" containerID="5abe419e5441b4d1192c98148ed978fbece8cae38c83d27035e8cd18c27f1f5c" exitCode=0 Jan 23 11:56:22 crc kubenswrapper[4865]: I0123 11:56:22.582062 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" event={"ID":"aa64ba84-e782-4319-8cba-d478e214e200","Type":"ContainerDied","Data":"5abe419e5441b4d1192c98148ed978fbece8cae38c83d27035e8cd18c27f1f5c"} Jan 23 11:56:22 crc kubenswrapper[4865]: I0123 11:56:22.609067 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:56:22 crc kubenswrapper[4865]: I0123 11:56:22.609114 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:56:22 crc kubenswrapper[4865]: I0123 11:56:22.640152 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:56:22 crc kubenswrapper[4865]: I0123 11:56:22.642892 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:56:22 crc kubenswrapper[4865]: I0123 11:56:22.664986 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:56:23 crc kubenswrapper[4865]: I0123 11:56:23.151170 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gzh6l"] Jan 23 11:56:23 crc kubenswrapper[4865]: I0123 11:56:23.377771 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:56:23 crc kubenswrapper[4865]: I0123 11:56:23.377830 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:56:23 crc kubenswrapper[4865]: I0123 11:56:23.438046 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:56:23 crc kubenswrapper[4865]: I0123 11:56:23.588223 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gzh6l" podUID="8f13b08e-48a9-423c-ae8c-d1b13239074d" containerName="registry-server" containerID="cri-o://a494e419553bb9e338dd8fbcd3b6faeb864316e9fb8416c77890d81a0a6a20b0" gracePeriod=2 Jan 23 11:56:23 crc kubenswrapper[4865]: I0123 11:56:23.639313 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:56:23 crc kubenswrapper[4865]: I0123 11:56:23.649228 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h4zc4" 
Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.121567 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.174995 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr"] Jan 23 11:56:24 crc kubenswrapper[4865]: E0123 11:56:24.175300 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa64ba84-e782-4319-8cba-d478e214e200" containerName="route-controller-manager" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.175323 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa64ba84-e782-4319-8cba-d478e214e200" containerName="route-controller-manager" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.175453 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa64ba84-e782-4319-8cba-d478e214e200" containerName="route-controller-manager" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.175971 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.182392 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.183284 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr"] Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.321575 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxll7\" (UniqueName: \"kubernetes.io/projected/aa64ba84-e782-4319-8cba-d478e214e200-kube-api-access-sxll7\") pod \"aa64ba84-e782-4319-8cba-d478e214e200\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.321641 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa64ba84-e782-4319-8cba-d478e214e200-config\") pod \"aa64ba84-e782-4319-8cba-d478e214e200\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.321679 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa64ba84-e782-4319-8cba-d478e214e200-serving-cert\") pod \"aa64ba84-e782-4319-8cba-d478e214e200\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.321803 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-config\") pod \"2d08dd61-4283-4b88-bb60-4c37c35e070a\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.321821 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-proxy-ca-bundles\") pod \"2d08dd61-4283-4b88-bb60-4c37c35e070a\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.321910 4865 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7czfx\" (UniqueName: \"kubernetes.io/projected/2d08dd61-4283-4b88-bb60-4c37c35e070a-kube-api-access-7czfx\") pod \"2d08dd61-4283-4b88-bb60-4c37c35e070a\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.321946 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d08dd61-4283-4b88-bb60-4c37c35e070a-serving-cert\") pod \"2d08dd61-4283-4b88-bb60-4c37c35e070a\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.321980 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-client-ca\") pod \"2d08dd61-4283-4b88-bb60-4c37c35e070a\" (UID: \"2d08dd61-4283-4b88-bb60-4c37c35e070a\") " Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.322016 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa64ba84-e782-4319-8cba-d478e214e200-client-ca\") pod \"aa64ba84-e782-4319-8cba-d478e214e200\" (UID: \"aa64ba84-e782-4319-8cba-d478e214e200\") " Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.322246 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60877fc9-78f8-4298-8104-8cd90e28d3bd-config\") pod \"route-controller-manager-6497cbfbf6-fkmfr\" (UID: \"60877fc9-78f8-4298-8104-8cd90e28d3bd\") " pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.322286 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/60877fc9-78f8-4298-8104-8cd90e28d3bd-client-ca\") pod \"route-controller-manager-6497cbfbf6-fkmfr\" (UID: \"60877fc9-78f8-4298-8104-8cd90e28d3bd\") " pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.322331 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60877fc9-78f8-4298-8104-8cd90e28d3bd-serving-cert\") pod \"route-controller-manager-6497cbfbf6-fkmfr\" (UID: \"60877fc9-78f8-4298-8104-8cd90e28d3bd\") " pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.322441 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnkw7\" (UniqueName: \"kubernetes.io/projected/60877fc9-78f8-4298-8104-8cd90e28d3bd-kube-api-access-cnkw7\") pod \"route-controller-manager-6497cbfbf6-fkmfr\" (UID: \"60877fc9-78f8-4298-8104-8cd90e28d3bd\") " pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.322579 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2d08dd61-4283-4b88-bb60-4c37c35e070a" (UID: "2d08dd61-4283-4b88-bb60-4c37c35e070a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.322886 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-client-ca" (OuterVolumeSpecName: "client-ca") pod "2d08dd61-4283-4b88-bb60-4c37c35e070a" (UID: "2d08dd61-4283-4b88-bb60-4c37c35e070a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.322965 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa64ba84-e782-4319-8cba-d478e214e200-client-ca" (OuterVolumeSpecName: "client-ca") pod "aa64ba84-e782-4319-8cba-d478e214e200" (UID: "aa64ba84-e782-4319-8cba-d478e214e200"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.322964 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-config" (OuterVolumeSpecName: "config") pod "2d08dd61-4283-4b88-bb60-4c37c35e070a" (UID: "2d08dd61-4283-4b88-bb60-4c37c35e070a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.322381 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa64ba84-e782-4319-8cba-d478e214e200-config" (OuterVolumeSpecName: "config") pod "aa64ba84-e782-4319-8cba-d478e214e200" (UID: "aa64ba84-e782-4319-8cba-d478e214e200"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.328787 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d08dd61-4283-4b88-bb60-4c37c35e070a-kube-api-access-7czfx" (OuterVolumeSpecName: "kube-api-access-7czfx") pod "2d08dd61-4283-4b88-bb60-4c37c35e070a" (UID: "2d08dd61-4283-4b88-bb60-4c37c35e070a"). InnerVolumeSpecName "kube-api-access-7czfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.328879 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa64ba84-e782-4319-8cba-d478e214e200-kube-api-access-sxll7" (OuterVolumeSpecName: "kube-api-access-sxll7") pod "aa64ba84-e782-4319-8cba-d478e214e200" (UID: "aa64ba84-e782-4319-8cba-d478e214e200"). InnerVolumeSpecName "kube-api-access-sxll7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.340533 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d08dd61-4283-4b88-bb60-4c37c35e070a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2d08dd61-4283-4b88-bb60-4c37c35e070a" (UID: "2d08dd61-4283-4b88-bb60-4c37c35e070a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.344157 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa64ba84-e782-4319-8cba-d478e214e200-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aa64ba84-e782-4319-8cba-d478e214e200" (UID: "aa64ba84-e782-4319-8cba-d478e214e200"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.424309 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60877fc9-78f8-4298-8104-8cd90e28d3bd-config\") pod \"route-controller-manager-6497cbfbf6-fkmfr\" (UID: \"60877fc9-78f8-4298-8104-8cd90e28d3bd\") " pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.424684 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/60877fc9-78f8-4298-8104-8cd90e28d3bd-client-ca\") pod \"route-controller-manager-6497cbfbf6-fkmfr\" (UID: \"60877fc9-78f8-4298-8104-8cd90e28d3bd\") " pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.424721 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60877fc9-78f8-4298-8104-8cd90e28d3bd-serving-cert\") pod \"route-controller-manager-6497cbfbf6-fkmfr\" (UID: \"60877fc9-78f8-4298-8104-8cd90e28d3bd\") " pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.424919 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnkw7\" (UniqueName: \"kubernetes.io/projected/60877fc9-78f8-4298-8104-8cd90e28d3bd-kube-api-access-cnkw7\") pod \"route-controller-manager-6497cbfbf6-fkmfr\" (UID: \"60877fc9-78f8-4298-8104-8cd90e28d3bd\") " pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.424984 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7czfx\" (UniqueName: \"kubernetes.io/projected/2d08dd61-4283-4b88-bb60-4c37c35e070a-kube-api-access-7czfx\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.425002 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d08dd61-4283-4b88-bb60-4c37c35e070a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.425015 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.425027 4865 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aa64ba84-e782-4319-8cba-d478e214e200-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.425041 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxll7\" (UniqueName: \"kubernetes.io/projected/aa64ba84-e782-4319-8cba-d478e214e200-kube-api-access-sxll7\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.425051 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa64ba84-e782-4319-8cba-d478e214e200-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.425062 4865 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/aa64ba84-e782-4319-8cba-d478e214e200-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.425075 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-config\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.425086 4865 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2d08dd61-4283-4b88-bb60-4c37c35e070a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.425918 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60877fc9-78f8-4298-8104-8cd90e28d3bd-config\") pod \"route-controller-manager-6497cbfbf6-fkmfr\" (UID: \"60877fc9-78f8-4298-8104-8cd90e28d3bd\") " pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.425927 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/60877fc9-78f8-4298-8104-8cd90e28d3bd-client-ca\") pod \"route-controller-manager-6497cbfbf6-fkmfr\" (UID: \"60877fc9-78f8-4298-8104-8cd90e28d3bd\") " pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.429005 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60877fc9-78f8-4298-8104-8cd90e28d3bd-serving-cert\") pod \"route-controller-manager-6497cbfbf6-fkmfr\" (UID: \"60877fc9-78f8-4298-8104-8cd90e28d3bd\") " pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.442460 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnkw7\" (UniqueName: \"kubernetes.io/projected/60877fc9-78f8-4298-8104-8cd90e28d3bd-kube-api-access-cnkw7\") pod \"route-controller-manager-6497cbfbf6-fkmfr\" (UID: \"60877fc9-78f8-4298-8104-8cd90e28d3bd\") " pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.511228 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.599184 4865 generic.go:334] "Generic (PLEG): container finished" podID="8f13b08e-48a9-423c-ae8c-d1b13239074d" containerID="a494e419553bb9e338dd8fbcd3b6faeb864316e9fb8416c77890d81a0a6a20b0" exitCode=0 Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.599256 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gzh6l" event={"ID":"8f13b08e-48a9-423c-ae8c-d1b13239074d","Type":"ContainerDied","Data":"a494e419553bb9e338dd8fbcd3b6faeb864316e9fb8416c77890d81a0a6a20b0"} Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.600963 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"a6b1b0eba2941eeb0825d0b03a6164c492659197949b6b6163a76c28e2d0b61a"} Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.602962 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" event={"ID":"2d08dd61-4283-4b88-bb60-4c37c35e070a","Type":"ContainerDied","Data":"90aaf6217ace04e740cf5ced4acc109561e6566553b2df509a758e9f411878df"} Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.602986 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6748474664-5fkzx" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.602993 4865 scope.go:117] "RemoveContainer" containerID="b272ae6a4fc81a3df3c441192e2a34271025083af39b615f644c1f7a939f0f79" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.621715 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.621522 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng" event={"ID":"aa64ba84-e782-4319-8cba-d478e214e200","Type":"ContainerDied","Data":"0030a8f85659948aab204898419c3694b08160c7732c0287240c0fb1c36c40cf"} Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.637705 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6748474664-5fkzx"] Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.640058 4865 scope.go:117] "RemoveContainer" containerID="5abe419e5441b4d1192c98148ed978fbece8cae38c83d27035e8cd18c27f1f5c" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.644438 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6748474664-5fkzx"] Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.653759 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng"] Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.656779 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65cd57845b-6gnng"] Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.860500 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.896478 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.953439 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4zc4"] Jan 23 11:56:24 crc kubenswrapper[4865]: I0123 11:56:24.965946 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr"] Jan 23 11:56:24 crc kubenswrapper[4865]: W0123 11:56:24.969010 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60877fc9_78f8_4298_8104_8cd90e28d3bd.slice/crio-2d7ad1f687a60ca160bd9afdb4bdc42383487ccc4899a246f63260f0a142d2ea WatchSource:0}: Error finding container 2d7ad1f687a60ca160bd9afdb4bdc42383487ccc4899a246f63260f0a142d2ea: Status 404 returned error can't find the container with id 2d7ad1f687a60ca160bd9afdb4bdc42383487ccc4899a246f63260f0a142d2ea Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.206705 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.252418 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.348157 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.440062 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f13b08e-48a9-423c-ae8c-d1b13239074d-utilities\") pod \"8f13b08e-48a9-423c-ae8c-d1b13239074d\" (UID: \"8f13b08e-48a9-423c-ae8c-d1b13239074d\") " Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.440383 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmxjp\" (UniqueName: \"kubernetes.io/projected/8f13b08e-48a9-423c-ae8c-d1b13239074d-kube-api-access-vmxjp\") pod \"8f13b08e-48a9-423c-ae8c-d1b13239074d\" (UID: \"8f13b08e-48a9-423c-ae8c-d1b13239074d\") " Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.440471 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f13b08e-48a9-423c-ae8c-d1b13239074d-catalog-content\") pod \"8f13b08e-48a9-423c-ae8c-d1b13239074d\" (UID: \"8f13b08e-48a9-423c-ae8c-d1b13239074d\") " Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.441034 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f13b08e-48a9-423c-ae8c-d1b13239074d-utilities" (OuterVolumeSpecName: "utilities") pod "8f13b08e-48a9-423c-ae8c-d1b13239074d" (UID: "8f13b08e-48a9-423c-ae8c-d1b13239074d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.445397 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f13b08e-48a9-423c-ae8c-d1b13239074d-kube-api-access-vmxjp" (OuterVolumeSpecName: "kube-api-access-vmxjp") pod "8f13b08e-48a9-423c-ae8c-d1b13239074d" (UID: "8f13b08e-48a9-423c-ae8c-d1b13239074d"). InnerVolumeSpecName "kube-api-access-vmxjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.506671 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f13b08e-48a9-423c-ae8c-d1b13239074d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f13b08e-48a9-423c-ae8c-d1b13239074d" (UID: "8f13b08e-48a9-423c-ae8c-d1b13239074d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.541092 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f13b08e-48a9-423c-ae8c-d1b13239074d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.541309 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmxjp\" (UniqueName: \"kubernetes.io/projected/8f13b08e-48a9-423c-ae8c-d1b13239074d-kube-api-access-vmxjp\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.541389 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f13b08e-48a9-423c-ae8c-d1b13239074d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.629888 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" event={"ID":"60877fc9-78f8-4298-8104-8cd90e28d3bd","Type":"ContainerStarted","Data":"fe8e5fdd26caa016dbb63f464761d687129188d8bc9524e4503f2cdbb1d13171"} Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.629942 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" event={"ID":"60877fc9-78f8-4298-8104-8cd90e28d3bd","Type":"ContainerStarted","Data":"2d7ad1f687a60ca160bd9afdb4bdc42383487ccc4899a246f63260f0a142d2ea"} Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.629962 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.636018 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gzh6l" event={"ID":"8f13b08e-48a9-423c-ae8c-d1b13239074d","Type":"ContainerDied","Data":"9124fc91e0897ed300a7fab607e8da08d1fde4dde1fe8d45dc5c4455cc9fb3f3"} Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.636075 4865 scope.go:117] "RemoveContainer" containerID="a494e419553bb9e338dd8fbcd3b6faeb864316e9fb8416c77890d81a0a6a20b0" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.636174 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gzh6l" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.639108 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h4zc4" podUID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" containerName="registry-server" containerID="cri-o://ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a" gracePeriod=2 Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.650324 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podStartSLOduration=6.650271077 podStartE2EDuration="6.650271077s" podCreationTimestamp="2026-01-23 11:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:56:25.64659309 +0000 UTC m=+229.815665326" watchObservedRunningTime="2026-01-23 11:56:25.650271077 +0000 UTC m=+229.819343313" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.655005 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.668913 4865 scope.go:117] "RemoveContainer" containerID="3dba2e3ecae649b301ef84aeb8fc87aaedb4147fcaa0c53e0c504d233de47bf1" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.689582 4865 scope.go:117] "RemoveContainer" containerID="02c3ac4becade0e703539b9cbabf97e2613016e3313f1b82eb72116da9e7b4d6" Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.699727 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gzh6l"] Jan 23 11:56:25 crc kubenswrapper[4865]: I0123 11:56:25.707347 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gzh6l"] Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.047138 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.126627 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d08dd61-4283-4b88-bb60-4c37c35e070a" path="/var/lib/kubelet/pods/2d08dd61-4283-4b88-bb60-4c37c35e070a/volumes" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.127754 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f13b08e-48a9-423c-ae8c-d1b13239074d" path="/var/lib/kubelet/pods/8f13b08e-48a9-423c-ae8c-d1b13239074d/volumes" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.128522 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa64ba84-e782-4319-8cba-d478e214e200" path="/var/lib/kubelet/pods/aa64ba84-e782-4319-8cba-d478e214e200/volumes" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.200552 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79gr7\" (UniqueName: \"kubernetes.io/projected/777ae5a8-8d44-4d0d-a598-d1782fcc9585-kube-api-access-79gr7\") pod \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\" (UID: \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\") " Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.200650 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777ae5a8-8d44-4d0d-a598-d1782fcc9585-utilities\") pod \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\" (UID: \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\") " Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.200698 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777ae5a8-8d44-4d0d-a598-d1782fcc9585-catalog-content\") pod \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\" (UID: \"777ae5a8-8d44-4d0d-a598-d1782fcc9585\") " Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.201466 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777ae5a8-8d44-4d0d-a598-d1782fcc9585-utilities" (OuterVolumeSpecName: "utilities") pod "777ae5a8-8d44-4d0d-a598-d1782fcc9585" (UID: "777ae5a8-8d44-4d0d-a598-d1782fcc9585"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.207268 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/777ae5a8-8d44-4d0d-a598-d1782fcc9585-kube-api-access-79gr7" (OuterVolumeSpecName: "kube-api-access-79gr7") pod "777ae5a8-8d44-4d0d-a598-d1782fcc9585" (UID: "777ae5a8-8d44-4d0d-a598-d1782fcc9585"). InnerVolumeSpecName "kube-api-access-79gr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.241200 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/777ae5a8-8d44-4d0d-a598-d1782fcc9585-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "777ae5a8-8d44-4d0d-a598-d1782fcc9585" (UID: "777ae5a8-8d44-4d0d-a598-d1782fcc9585"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.302660 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79gr7\" (UniqueName: \"kubernetes.io/projected/777ae5a8-8d44-4d0d-a598-d1782fcc9585-kube-api-access-79gr7\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.303370 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/777ae5a8-8d44-4d0d-a598-d1782fcc9585-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.303519 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/777ae5a8-8d44-4d0d-a598-d1782fcc9585-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.648218 4865 generic.go:334] "Generic (PLEG): container finished" podID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" containerID="ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a" exitCode=0 Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.648303 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zc4" event={"ID":"777ae5a8-8d44-4d0d-a598-d1782fcc9585","Type":"ContainerDied","Data":"ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a"} Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.648318 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4zc4" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.648336 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4zc4" event={"ID":"777ae5a8-8d44-4d0d-a598-d1782fcc9585","Type":"ContainerDied","Data":"f833e33df596fac61e47ea585899a711c32fe2b2fd25463a4fe016f2cd98113a"} Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.648354 4865 scope.go:117] "RemoveContainer" containerID="ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.666926 4865 scope.go:117] "RemoveContainer" containerID="9bbe6b60f6a38f41d185c32f339a8e81af4a99ffaf00d38f4fbf29bcd862d849" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.675691 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4zc4"] Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.687650 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h4zc4"] Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.704918 4865 scope.go:117] "RemoveContainer" containerID="abc8042ac05993f5a0d5a83caca0f286ded9fe48d551b3833d5b1cfede359c9d" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.714106 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f9669f7bd-ckgrk"] Jan 23 11:56:26 crc kubenswrapper[4865]: E0123 11:56:26.714399 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f13b08e-48a9-423c-ae8c-d1b13239074d" containerName="registry-server" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.714416 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f13b08e-48a9-423c-ae8c-d1b13239074d" containerName="registry-server" Jan 23 11:56:26 crc kubenswrapper[4865]: E0123 11:56:26.714430 4865 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" containerName="extract-utilities" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.714436 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" containerName="extract-utilities" Jan 23 11:56:26 crc kubenswrapper[4865]: E0123 11:56:26.714446 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f13b08e-48a9-423c-ae8c-d1b13239074d" containerName="extract-content" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.714452 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f13b08e-48a9-423c-ae8c-d1b13239074d" containerName="extract-content" Jan 23 11:56:26 crc kubenswrapper[4865]: E0123 11:56:26.714461 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" containerName="extract-content" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.714467 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" containerName="extract-content" Jan 23 11:56:26 crc kubenswrapper[4865]: E0123 11:56:26.714475 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d08dd61-4283-4b88-bb60-4c37c35e070a" containerName="controller-manager" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.714482 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d08dd61-4283-4b88-bb60-4c37c35e070a" containerName="controller-manager" Jan 23 11:56:26 crc kubenswrapper[4865]: E0123 11:56:26.714491 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f13b08e-48a9-423c-ae8c-d1b13239074d" containerName="extract-utilities" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.714497 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f13b08e-48a9-423c-ae8c-d1b13239074d" containerName="extract-utilities" Jan 23 11:56:26 crc kubenswrapper[4865]: E0123 11:56:26.714506 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" containerName="registry-server" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.714512 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" containerName="registry-server" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.714621 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" containerName="registry-server" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.714868 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d08dd61-4283-4b88-bb60-4c37c35e070a" containerName="controller-manager" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.714888 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f13b08e-48a9-423c-ae8c-d1b13239074d" containerName="registry-server" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.715343 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.717523 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.718220 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.729873 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f9669f7bd-ckgrk"] Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.730068 4865 scope.go:117] "RemoveContainer" containerID="ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.730893 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.731146 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.731373 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.731509 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 11:56:26 crc kubenswrapper[4865]: E0123 11:56:26.734210 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a\": container with ID starting with ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a not found: ID does not exist" containerID="ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.734256 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a"} err="failed to get container status \"ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a\": rpc error: code = NotFound desc = could not find container \"ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a\": container with ID starting with ff8376c72b2b2d3e2741b0ba26a59549f1d89355b667e27628bed32d1f55017a not found: ID does not exist" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.734293 4865 scope.go:117] "RemoveContainer" containerID="9bbe6b60f6a38f41d185c32f339a8e81af4a99ffaf00d38f4fbf29bcd862d849" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.735080 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 11:56:26 crc kubenswrapper[4865]: E0123 11:56:26.736074 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bbe6b60f6a38f41d185c32f339a8e81af4a99ffaf00d38f4fbf29bcd862d849\": container with ID starting with 9bbe6b60f6a38f41d185c32f339a8e81af4a99ffaf00d38f4fbf29bcd862d849 not found: ID does not exist" containerID="9bbe6b60f6a38f41d185c32f339a8e81af4a99ffaf00d38f4fbf29bcd862d849" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.736194 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bbe6b60f6a38f41d185c32f339a8e81af4a99ffaf00d38f4fbf29bcd862d849"} err="failed to get container status \"9bbe6b60f6a38f41d185c32f339a8e81af4a99ffaf00d38f4fbf29bcd862d849\": rpc error: code = NotFound desc = could not find container \"9bbe6b60f6a38f41d185c32f339a8e81af4a99ffaf00d38f4fbf29bcd862d849\": container with ID starting with 9bbe6b60f6a38f41d185c32f339a8e81af4a99ffaf00d38f4fbf29bcd862d849 not found: ID does not exist" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.736293 4865 scope.go:117] "RemoveContainer" containerID="abc8042ac05993f5a0d5a83caca0f286ded9fe48d551b3833d5b1cfede359c9d" Jan 23 11:56:26 crc kubenswrapper[4865]: E0123 11:56:26.736757 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abc8042ac05993f5a0d5a83caca0f286ded9fe48d551b3833d5b1cfede359c9d\": container with ID starting with abc8042ac05993f5a0d5a83caca0f286ded9fe48d551b3833d5b1cfede359c9d not found: ID does not exist" containerID="abc8042ac05993f5a0d5a83caca0f286ded9fe48d551b3833d5b1cfede359c9d" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.736792 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abc8042ac05993f5a0d5a83caca0f286ded9fe48d551b3833d5b1cfede359c9d"} err="failed to get container status \"abc8042ac05993f5a0d5a83caca0f286ded9fe48d551b3833d5b1cfede359c9d\": rpc error: code = NotFound desc = could not find container \"abc8042ac05993f5a0d5a83caca0f286ded9fe48d551b3833d5b1cfede359c9d\": container with ID starting with abc8042ac05993f5a0d5a83caca0f286ded9fe48d551b3833d5b1cfede359c9d not found: ID does not exist" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.911764 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbg5q\" (UniqueName: \"kubernetes.io/projected/97f32b90-08dc-4333-95e6-a2e85648931f-kube-api-access-dbg5q\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.911840 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f32b90-08dc-4333-95e6-a2e85648931f-config\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.911868 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f32b90-08dc-4333-95e6-a2e85648931f-client-ca\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:26 crc kubenswrapper[4865]: I0123 11:56:26.911910 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f32b90-08dc-4333-95e6-a2e85648931f-proxy-ca-bundles\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:26 crc kubenswrapper[4865]: 
I0123 11:56:26.911946 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f32b90-08dc-4333-95e6-a2e85648931f-serving-cert\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.013152 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f32b90-08dc-4333-95e6-a2e85648931f-serving-cert\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.013208 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbg5q\" (UniqueName: \"kubernetes.io/projected/97f32b90-08dc-4333-95e6-a2e85648931f-kube-api-access-dbg5q\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.013250 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f32b90-08dc-4333-95e6-a2e85648931f-config\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.013274 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f32b90-08dc-4333-95e6-a2e85648931f-client-ca\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.013306 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f32b90-08dc-4333-95e6-a2e85648931f-proxy-ca-bundles\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.014223 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/97f32b90-08dc-4333-95e6-a2e85648931f-proxy-ca-bundles\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.014822 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/97f32b90-08dc-4333-95e6-a2e85648931f-client-ca\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.015087 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97f32b90-08dc-4333-95e6-a2e85648931f-config\") pod 
\"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.017759 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97f32b90-08dc-4333-95e6-a2e85648931f-serving-cert\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.035208 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbg5q\" (UniqueName: \"kubernetes.io/projected/97f32b90-08dc-4333-95e6-a2e85648931f-kube-api-access-dbg5q\") pod \"controller-manager-f9669f7bd-ckgrk\" (UID: \"97f32b90-08dc-4333-95e6-a2e85648931f\") " pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.055637 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.338253 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f9669f7bd-ckgrk"] Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.660267 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" event={"ID":"97f32b90-08dc-4333-95e6-a2e85648931f","Type":"ContainerStarted","Data":"37dcbfb1251c349e55c1773e3e0d576eb2f367905f8c52fe2b5019c7a357539c"} Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.968431 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xvmlx"] Jan 23 11:56:27 crc kubenswrapper[4865]: I0123 11:56:27.969013 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xvmlx" podUID="abe1c851-e34c-49e4-991b-cec1c55a25b6" containerName="registry-server" containerID="cri-o://c24f78a98d4c5f878ef8d78443f41933f409a879cd25c94a745043c88df9f97f" gracePeriod=2 Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.125591 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="777ae5a8-8d44-4d0d-a598-d1782fcc9585" path="/var/lib/kubelet/pods/777ae5a8-8d44-4d0d-a598-d1782fcc9585/volumes" Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.672862 4865 generic.go:334] "Generic (PLEG): container finished" podID="abe1c851-e34c-49e4-991b-cec1c55a25b6" containerID="c24f78a98d4c5f878ef8d78443f41933f409a879cd25c94a745043c88df9f97f" exitCode=0 Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.673256 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvmlx" event={"ID":"abe1c851-e34c-49e4-991b-cec1c55a25b6","Type":"ContainerDied","Data":"c24f78a98d4c5f878ef8d78443f41933f409a879cd25c94a745043c88df9f97f"} Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.674619 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" event={"ID":"97f32b90-08dc-4333-95e6-a2e85648931f","Type":"ContainerStarted","Data":"f7db30f04928c52c6d1185acbe8a775b6211677b2574d48e1b3cd288e7764e52"} Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.676343 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.681954 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.720397 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podStartSLOduration=9.72037691 podStartE2EDuration="9.72037691s" podCreationTimestamp="2026-01-23 11:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:56:28.697128144 +0000 UTC m=+232.866200360" watchObservedRunningTime="2026-01-23 11:56:28.72037691 +0000 UTC m=+232.889449136" Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.781773 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.939522 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzqhz\" (UniqueName: \"kubernetes.io/projected/abe1c851-e34c-49e4-991b-cec1c55a25b6-kube-api-access-fzqhz\") pod \"abe1c851-e34c-49e4-991b-cec1c55a25b6\" (UID: \"abe1c851-e34c-49e4-991b-cec1c55a25b6\") " Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.939649 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe1c851-e34c-49e4-991b-cec1c55a25b6-catalog-content\") pod \"abe1c851-e34c-49e4-991b-cec1c55a25b6\" (UID: \"abe1c851-e34c-49e4-991b-cec1c55a25b6\") " Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.939684 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe1c851-e34c-49e4-991b-cec1c55a25b6-utilities\") pod \"abe1c851-e34c-49e4-991b-cec1c55a25b6\" (UID: \"abe1c851-e34c-49e4-991b-cec1c55a25b6\") " Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.940357 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe1c851-e34c-49e4-991b-cec1c55a25b6-utilities" (OuterVolumeSpecName: "utilities") pod "abe1c851-e34c-49e4-991b-cec1c55a25b6" (UID: "abe1c851-e34c-49e4-991b-cec1c55a25b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:56:28 crc kubenswrapper[4865]: I0123 11:56:28.946114 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe1c851-e34c-49e4-991b-cec1c55a25b6-kube-api-access-fzqhz" (OuterVolumeSpecName: "kube-api-access-fzqhz") pod "abe1c851-e34c-49e4-991b-cec1c55a25b6" (UID: "abe1c851-e34c-49e4-991b-cec1c55a25b6"). InnerVolumeSpecName "kube-api-access-fzqhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:56:29 crc kubenswrapper[4865]: I0123 11:56:29.041457 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzqhz\" (UniqueName: \"kubernetes.io/projected/abe1c851-e34c-49e4-991b-cec1c55a25b6-kube-api-access-fzqhz\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:29 crc kubenswrapper[4865]: I0123 11:56:29.041497 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abe1c851-e34c-49e4-991b-cec1c55a25b6-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:29 crc kubenswrapper[4865]: I0123 11:56:29.073423 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe1c851-e34c-49e4-991b-cec1c55a25b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abe1c851-e34c-49e4-991b-cec1c55a25b6" (UID: "abe1c851-e34c-49e4-991b-cec1c55a25b6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:56:29 crc kubenswrapper[4865]: I0123 11:56:29.142505 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abe1c851-e34c-49e4-991b-cec1c55a25b6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:29 crc kubenswrapper[4865]: I0123 11:56:29.686393 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xvmlx" Jan 23 11:56:29 crc kubenswrapper[4865]: I0123 11:56:29.686373 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xvmlx" event={"ID":"abe1c851-e34c-49e4-991b-cec1c55a25b6","Type":"ContainerDied","Data":"63a811f6410c07885d1cd405663b6d4cbd93641f1e42e6b55da8e17a0abc1618"} Jan 23 11:56:29 crc kubenswrapper[4865]: I0123 11:56:29.686876 4865 scope.go:117] "RemoveContainer" containerID="c24f78a98d4c5f878ef8d78443f41933f409a879cd25c94a745043c88df9f97f" Jan 23 11:56:29 crc kubenswrapper[4865]: I0123 11:56:29.727189 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xvmlx"] Jan 23 11:56:29 crc kubenswrapper[4865]: I0123 11:56:29.731911 4865 scope.go:117] "RemoveContainer" containerID="65438d8bb6b1cae701668ef39ea835d20c87c4686de82a798af131dfa01ce3b0" Jan 23 11:56:29 crc kubenswrapper[4865]: I0123 11:56:29.736151 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xvmlx"] Jan 23 11:56:29 crc kubenswrapper[4865]: I0123 11:56:29.756082 4865 scope.go:117] "RemoveContainer" containerID="c651ee5a776d2358c9284b638628ef0105b705cbb421bbf3962b3ffa0a6e5182" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.126721 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe1c851-e34c-49e4-991b-cec1c55a25b6" path="/var/lib/kubelet/pods/abe1c851-e34c-49e4-991b-cec1c55a25b6/volumes" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.939072 4865 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 11:56:30 crc kubenswrapper[4865]: E0123 11:56:30.939520 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe1c851-e34c-49e4-991b-cec1c55a25b6" containerName="registry-server" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.939534 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe1c851-e34c-49e4-991b-cec1c55a25b6" containerName="registry-server" Jan 23 11:56:30 crc 
kubenswrapper[4865]: E0123 11:56:30.939547 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe1c851-e34c-49e4-991b-cec1c55a25b6" containerName="extract-utilities" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.939568 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe1c851-e34c-49e4-991b-cec1c55a25b6" containerName="extract-utilities" Jan 23 11:56:30 crc kubenswrapper[4865]: E0123 11:56:30.939583 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe1c851-e34c-49e4-991b-cec1c55a25b6" containerName="extract-content" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.939590 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe1c851-e34c-49e4-991b-cec1c55a25b6" containerName="extract-content" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.939743 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe1c851-e34c-49e4-991b-cec1c55a25b6" containerName="registry-server" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.940087 4865 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.940214 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.940348 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c" gracePeriod=15 Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.940444 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2" gracePeriod=15 Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.940506 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e" gracePeriod=15 Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.940550 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c" gracePeriod=15 Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.940985 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3" gracePeriod=15 Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.941418 4865 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 11:56:30 crc kubenswrapper[4865]: E0123 11:56:30.941939 4865 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.941953 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 11:56:30 crc kubenswrapper[4865]: E0123 11:56:30.941963 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.941970 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 23 11:56:30 crc kubenswrapper[4865]: E0123 11:56:30.941977 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.941983 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 11:56:30 crc kubenswrapper[4865]: E0123 11:56:30.941992 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.941999 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 11:56:30 crc kubenswrapper[4865]: E0123 11:56:30.942021 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.942026 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 11:56:30 crc kubenswrapper[4865]: E0123 11:56:30.942038 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.942044 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.942146 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.942160 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.942169 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.942177 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.942184 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 11:56:30 crc kubenswrapper[4865]: E0123 11:56:30.942285 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.942294 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 11:56:30 crc kubenswrapper[4865]: I0123 11:56:30.942391 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 11:56:30 crc kubenswrapper[4865]: E0123 11:56:30.976306 4865 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.069706 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.069807 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.069835 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.069956 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.070008 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.070088 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.070178 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" 
(UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.070205 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.106921 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.107002 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172051 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172114 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172137 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172161 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172196 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172234 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 
11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172261 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172283 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172427 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172490 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172513 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172554 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172573 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172612 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172644 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.172664 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.277346 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: W0123 11:56:31.300751 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-5aa1a2d5def20a426961db61490d68b7c98728ea08fcf2f1b7640d9a7d0501aa WatchSource:0}: Error finding container 5aa1a2d5def20a426961db61490d68b7c98728ea08fcf2f1b7640d9a7d0501aa: Status 404 returned error can't find the container with id 5aa1a2d5def20a426961db61490d68b7c98728ea08fcf2f1b7640d9a7d0501aa Jan 23 11:56:31 crc kubenswrapper[4865]: E0123 11:56:31.304134 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d5a301c1d6b6c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 11:56:31.3030931 +0000 UTC m=+235.472165326,LastTimestamp:2026-01-23 11:56:31.3030931 +0000 UTC m=+235.472165326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.712397 4865 generic.go:334] "Generic (PLEG): container finished" podID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" containerID="27c7474ed12182ee2eb1a65ca6f595ad4fb920e3b5783b320b6589c58a0ab957" exitCode=0 Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.712477 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"886599e1-f8ad-4ffd-9b2b-7db39dec28ee","Type":"ContainerDied","Data":"27c7474ed12182ee2eb1a65ca6f595ad4fb920e3b5783b320b6589c58a0ab957"} Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.714724 4865 status_manager.go:851] "Failed to get status for pod" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.715361 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: 
connection refused" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.717005 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"df381d27e123f9a846e1510a8c97673fbe3ddef3562ca908acda6796694ccf46"} Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.717039 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"5aa1a2d5def20a426961db61490d68b7c98728ea08fcf2f1b7640d9a7d0501aa"} Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.717636 4865 status_manager.go:851] "Failed to get status for pod" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:31 crc kubenswrapper[4865]: E0123 11:56:31.717790 4865 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.718477 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.721376 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.722893 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.724050 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c" exitCode=0 Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.724074 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2" exitCode=0 Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.724083 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3" exitCode=0 Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.724092 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e" exitCode=2 Jan 23 11:56:31 crc kubenswrapper[4865]: I0123 11:56:31.724127 4865 scope.go:117] "RemoveContainer" containerID="93c9d245dcc8d99fcc5d60e02dee8a648e98d63a6980dfc9ba215468a2fddce2" Jan 23 11:56:32 crc kubenswrapper[4865]: I0123 11:56:32.736175 4865 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 11:56:32 crc kubenswrapper[4865]: E0123 11:56:32.983990 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d5a301c1d6b6c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 11:56:31.3030931 +0000 UTC m=+235.472165326,LastTimestamp:2026-01-23 11:56:31.3030931 +0000 UTC m=+235.472165326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.027841 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.028531 4865 status_manager.go:851] "Failed to get status for pod" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.200851 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-kubelet-dir\") pod \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\" (UID: \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\") " Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.200925 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-kube-api-access\") pod \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\" (UID: \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\") " Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.200990 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-var-lock\") pod \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\" (UID: \"886599e1-f8ad-4ffd-9b2b-7db39dec28ee\") " Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.201245 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "886599e1-f8ad-4ffd-9b2b-7db39dec28ee" (UID: "886599e1-f8ad-4ffd-9b2b-7db39dec28ee"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.201332 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-var-lock" (OuterVolumeSpecName: "var-lock") pod "886599e1-f8ad-4ffd-9b2b-7db39dec28ee" (UID: "886599e1-f8ad-4ffd-9b2b-7db39dec28ee"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.207318 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "886599e1-f8ad-4ffd-9b2b-7db39dec28ee" (UID: "886599e1-f8ad-4ffd-9b2b-7db39dec28ee"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.302365 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.302403 4865 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.302412 4865 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/886599e1-f8ad-4ffd-9b2b-7db39dec28ee-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.746104 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.747138 4865 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c" exitCode=0 Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.748678 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"886599e1-f8ad-4ffd-9b2b-7db39dec28ee","Type":"ContainerDied","Data":"8761873b074db20c12bac1a971f534b5645bfb6a99867d4c2ec5993d665bd747"} Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.748717 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8761873b074db20c12bac1a971f534b5645bfb6a99867d4c2ec5993d665bd747" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.748722 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.760492 4865 status_manager.go:851] "Failed to get status for pod" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.811352 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.812714 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.813166 4865 status_manager.go:851] "Failed to get status for pod" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.813425 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.909417 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.909499 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.909577 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.909728 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.909758 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.909774 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.910245 4865 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.910275 4865 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:33 crc kubenswrapper[4865]: I0123 11:56:33.910289 4865 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.124430 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.755751 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.756987 4865 scope.go:117] "RemoveContainer" containerID="08337b4fbe79a7d04aa4896aab88e36815ec96d95492d8bce802178e6424456c" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.757046 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.757755 4865 status_manager.go:851] "Failed to get status for pod" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.758795 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.760217 4865 status_manager.go:851] "Failed to get status for pod" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.760365 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.771596 4865 scope.go:117] "RemoveContainer" containerID="3ae80299af7924ec05629a51566973afe43a0686e6a36340bc6d9d3c73c1a1f2" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.790834 4865 scope.go:117] "RemoveContainer" containerID="fcacc710ffa75bc2555652a38cfd9568caadab8634e7f93a685b6a55f6f3fab3" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.817107 4865 scope.go:117] "RemoveContainer" containerID="fe4dc3d9e734f251cc6ba95bc89891c75b90e550209ecd5af6714dbf974f075e" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.833028 4865 scope.go:117] "RemoveContainer" containerID="70eaa9333790f163b77fd949b97b225acb1948e1eb40c41c34791afb0ba9a39c" Jan 23 11:56:34 crc kubenswrapper[4865]: I0123 11:56:34.854936 4865 scope.go:117] "RemoveContainer" containerID="daae6fdc91138a1c2250074c7a1ffb86d1c8417d0f0e816dd44552f5954ee2a6" Jan 23 11:56:36 crc kubenswrapper[4865]: I0123 11:56:36.128928 4865 status_manager.go:851] "Failed to get status for pod" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:36 crc kubenswrapper[4865]: I0123 11:56:36.129429 4865 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:39 crc kubenswrapper[4865]: E0123 11:56:39.522355 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: 
connection refused" Jan 23 11:56:39 crc kubenswrapper[4865]: E0123 11:56:39.523460 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:39 crc kubenswrapper[4865]: E0123 11:56:39.524211 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:39 crc kubenswrapper[4865]: E0123 11:56:39.524703 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:39 crc kubenswrapper[4865]: E0123 11:56:39.525072 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:39 crc kubenswrapper[4865]: I0123 11:56:39.525116 4865 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 11:56:39 crc kubenswrapper[4865]: E0123 11:56:39.525553 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Jan 23 11:56:39 crc kubenswrapper[4865]: E0123 11:56:39.726186 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Jan 23 11:56:40 crc kubenswrapper[4865]: E0123 11:56:40.127269 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Jan 23 11:56:40 crc kubenswrapper[4865]: E0123 11:56:40.928646 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Jan 23 11:56:42 crc kubenswrapper[4865]: E0123 11:56:42.530170 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s" Jan 23 11:56:42 crc kubenswrapper[4865]: E0123 11:56:42.986206 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d5a301c1d6b6c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 11:56:31.3030931 +0000 UTC m=+235.472165326,LastTimestamp:2026-01-23 11:56:31.3030931 +0000 UTC m=+235.472165326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.118330 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.120788 4865 status_manager.go:851] "Failed to get status for pod" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.142309 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.142347 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb" Jan 23 11:56:44 crc kubenswrapper[4865]: E0123 11:56:44.142948 4865 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.143714 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:44 crc kubenswrapper[4865]: W0123 11:56:44.178071 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-8ed0ac881b28032496be1ea608a3c0552994437fd62e3c459a4ad94efb7e86c2 WatchSource:0}: Error finding container 8ed0ac881b28032496be1ea608a3c0552994437fd62e3c459a4ad94efb7e86c2: Status 404 returned error can't find the container with id 8ed0ac881b28032496be1ea608a3c0552994437fd62e3c459a4ad94efb7e86c2 Jan 23 11:56:44 crc kubenswrapper[4865]: E0123 11:56:44.189180 4865 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" volumeName="registry-storage" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.820647 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.821160 4865 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4" exitCode=1 Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.821260 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4"} Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.822133 4865 status_manager.go:851] "Failed to get status for pod" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.822155 4865 scope.go:117] "RemoveContainer" containerID="48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.822422 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.825042 4865 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="59f195e40c91126f07765a1e95b3be59035150644ff74c7f872fe2417bc069c0" exitCode=0 Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.825082 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"59f195e40c91126f07765a1e95b3be59035150644ff74c7f872fe2417bc069c0"} Jan 23 11:56:44 crc 
kubenswrapper[4865]: I0123 11:56:44.825113 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8ed0ac881b28032496be1ea608a3c0552994437fd62e3c459a4ad94efb7e86c2"} Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.825384 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.825405 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.825722 4865 status_manager.go:851] "Failed to get status for pod" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:44 crc kubenswrapper[4865]: E0123 11:56:44.825931 4865 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:44 crc kubenswrapper[4865]: I0123 11:56:44.826703 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 11:56:45 crc kubenswrapper[4865]: I0123 11:56:45.733554 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:56:45 crc kubenswrapper[4865]: I0123 11:56:45.840585 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d84f37524e750d3b3ca40614081b5115a85aa1f0645faf2b81e171493e55fd76"} Jan 23 11:56:45 crc kubenswrapper[4865]: I0123 11:56:45.840660 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fcfacea1c0fe286ff9333bef7e4113d626a1339eb8cf92c77307568a5e09fa12"} Jan 23 11:56:45 crc kubenswrapper[4865]: I0123 11:56:45.840679 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eee024785017ad7bd5569801f2f88238a8ed019d823b61baf5bc3c8c84d6284f"} Jan 23 11:56:45 crc kubenswrapper[4865]: I0123 11:56:45.840692 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5a66ef84b06b7efe6172f489b5b4f6cad35b034791d3c7a4bbff3f436519ae0e"} Jan 23 11:56:45 crc kubenswrapper[4865]: I0123 11:56:45.843400 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 11:56:45 crc kubenswrapper[4865]: I0123 11:56:45.843467 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"716198fba845e9e3bc3c1765621977f292a6e8ecb4f116f58a73e19b9cb9cabf"} Jan 23 11:56:46 crc kubenswrapper[4865]: I0123 11:56:46.852745 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb" Jan 23 11:56:46 crc kubenswrapper[4865]: I0123 11:56:46.853034 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb" Jan 23 11:56:46 crc kubenswrapper[4865]: I0123 11:56:46.852904 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"37c95bde45d0cea1ba3a5d366c78427256ccc8bfee4caf25b2b6627112721b29"} Jan 23 11:56:46 crc kubenswrapper[4865]: I0123 11:56:46.853258 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:49 crc kubenswrapper[4865]: I0123 11:56:49.144109 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:49 crc kubenswrapper[4865]: I0123 11:56:49.144708 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:49 crc kubenswrapper[4865]: I0123 11:56:49.151878 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:50 crc kubenswrapper[4865]: I0123 11:56:50.439827 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:56:50 crc kubenswrapper[4865]: I0123 11:56:50.447075 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:56:50 crc kubenswrapper[4865]: I0123 11:56:50.886818 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:56:51 crc kubenswrapper[4865]: I0123 11:56:51.861054 4865 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:51 crc kubenswrapper[4865]: I0123 11:56:51.891586 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb" Jan 23 11:56:51 crc kubenswrapper[4865]: I0123 11:56:51.891630 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb" Jan 23 11:56:51 crc kubenswrapper[4865]: I0123 11:56:51.895507 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:56:51 crc kubenswrapper[4865]: I0123 11:56:51.897849 4865 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" 
oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="21b8d852-26ac-44af-9e36-492c440ad61a" Jan 23 11:56:52 crc kubenswrapper[4865]: I0123 11:56:52.895303 4865 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb" Jan 23 11:56:52 crc kubenswrapper[4865]: I0123 11:56:52.896523 4865 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="be3c59f5-d1f9-44d2-a3e7-d397da8bb6bb" Jan 23 11:56:56 crc kubenswrapper[4865]: I0123 11:56:56.131714 4865 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="21b8d852-26ac-44af-9e36-492c440ad61a" Jan 23 11:57:00 crc kubenswrapper[4865]: I0123 11:57:00.826544 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 11:57:01 crc kubenswrapper[4865]: I0123 11:57:01.016940 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 11:57:01 crc kubenswrapper[4865]: I0123 11:57:01.927018 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 11:57:02 crc kubenswrapper[4865]: I0123 11:57:02.134653 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 11:57:02 crc kubenswrapper[4865]: I0123 11:57:02.270955 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 11:57:02 crc kubenswrapper[4865]: I0123 11:57:02.325202 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 11:57:02 crc kubenswrapper[4865]: I0123 11:57:02.409064 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 11:57:02 crc kubenswrapper[4865]: I0123 11:57:02.514500 4865 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 11:57:02 crc kubenswrapper[4865]: I0123 11:57:02.721593 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 11:57:02 crc kubenswrapper[4865]: I0123 11:57:02.777561 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 11:57:03 crc kubenswrapper[4865]: I0123 11:57:03.113086 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 11:57:03 crc kubenswrapper[4865]: I0123 11:57:03.220105 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 11:57:03 crc kubenswrapper[4865]: I0123 11:57:03.240524 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 11:57:03 crc kubenswrapper[4865]: I0123 11:57:03.353557 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 11:57:03 crc kubenswrapper[4865]: I0123 11:57:03.400528 4865 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 11:57:03 crc kubenswrapper[4865]: I0123 11:57:03.466526 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 11:57:03 crc kubenswrapper[4865]: I0123 11:57:03.651727 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 11:57:03 crc kubenswrapper[4865]: I0123 11:57:03.661524 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 11:57:03 crc kubenswrapper[4865]: I0123 11:57:03.804995 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 11:57:03 crc kubenswrapper[4865]: I0123 11:57:03.818174 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 11:57:04 crc kubenswrapper[4865]: I0123 11:57:04.017662 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 11:57:04 crc kubenswrapper[4865]: I0123 11:57:04.022245 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 11:57:04 crc kubenswrapper[4865]: I0123 11:57:04.133469 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 11:57:04 crc kubenswrapper[4865]: I0123 11:57:04.245712 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 11:57:04 crc kubenswrapper[4865]: I0123 11:57:04.381509 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 11:57:04 crc kubenswrapper[4865]: I0123 11:57:04.471456 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 11:57:04 crc kubenswrapper[4865]: I0123 11:57:04.509700 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 11:57:04 crc kubenswrapper[4865]: I0123 11:57:04.695376 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 11:57:04 crc kubenswrapper[4865]: I0123 11:57:04.708072 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 11:57:04 crc kubenswrapper[4865]: I0123 11:57:04.710239 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 11:57:04 crc kubenswrapper[4865]: I0123 11:57:04.956632 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 11:57:05 crc kubenswrapper[4865]: I0123 11:57:05.182470 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 11:57:05 crc kubenswrapper[4865]: I0123 11:57:05.368430 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 11:57:05 crc kubenswrapper[4865]: I0123 11:57:05.459135 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 11:57:05 crc kubenswrapper[4865]: I0123 11:57:05.588240 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 11:57:05 crc kubenswrapper[4865]: I0123 11:57:05.694673 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 11:57:05 crc kubenswrapper[4865]: I0123 11:57:05.787889 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 11:57:05 crc kubenswrapper[4865]: I0123 11:57:05.855839 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 11:57:05 crc kubenswrapper[4865]: I0123 11:57:05.966182 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.030866 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.047890 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.068818 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.130217 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.131033 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.192127 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.200177 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.202566 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.241077 4865 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.328234 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.341238 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.372890 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.705986 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 
11:57:06.729692 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.793960 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.797260 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 11:57:06 crc kubenswrapper[4865]: I0123 11:57:06.855824 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.005516 4865 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.035802 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.135818 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.151183 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.162348 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.224024 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.239327 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.251379 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.315797 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.348486 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.402003 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.404791 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.484798 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.603113 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.603191 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.604024 4865 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.605533 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.606495 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.613173 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.649306 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.668330 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.672973 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.679388 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.751943 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.782545 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.848115 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.860262 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.864786 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.887536 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.919006 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 11:57:07 crc kubenswrapper[4865]: I0123 11:57:07.976577 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.035338 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.100876 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.150093 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 
11:57:08.214533 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.215019 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.277909 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.299584 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.309972 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.366526 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.428059 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.446220 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.635428 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.642701 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.648249 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.655087 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.695845 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.731038 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.766767 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.783712 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.851094 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.851519 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.851896 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 11:57:08 crc kubenswrapper[4865]: I0123 11:57:08.967159 4865 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.035582 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.157565 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.184332 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.275145 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.340723 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.564379 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.582168 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.608531 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.694176 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.757383 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.810554 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.820923 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.856420 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.896225 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.950571 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.958083 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 11:57:09 crc kubenswrapper[4865]: I0123 11:57:09.978209 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.242638 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 11:57:10 crc 
kubenswrapper[4865]: I0123 11:57:10.397810 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.411276 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.427029 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.431771 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.471396 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.569626 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.623945 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.743208 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.829642 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.861229 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.935762 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.938157 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.961159 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.966588 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 11:57:10 crc kubenswrapper[4865]: I0123 11:57:10.995487 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.003178 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.009922 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.046252 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.112914 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 11:57:11 
crc kubenswrapper[4865]: I0123 11:57:11.172188 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.233558 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.326748 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.328486 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.357314 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.415464 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.466510 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.500886 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.501789 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.542878 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.599758 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.668721 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.712672 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.799083 4865 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.828155 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.855161 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 11:57:11 crc kubenswrapper[4865]: I0123 11:57:11.903722 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.002885 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.018860 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 11:57:12 crc kubenswrapper[4865]: 
I0123 11:57:12.148295 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.149977 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.244097 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.399216 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.454662 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.455524 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.551706 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.610461 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.644680 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.669256 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.705300 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.723822 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.816898 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.864499 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 11:57:12 crc kubenswrapper[4865]: I0123 11:57:12.920432 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.067438 4865 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.242342 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.304531 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.310912 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 
11:57:13.356363 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.537344 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.549121 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.557555 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.635590 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.710036 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.751571 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.817664 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.889876 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.956153 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 11:57:13 crc kubenswrapper[4865]: I0123 11:57:13.977167 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.056644 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.114085 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.140493 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.184651 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.204893 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.268145 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.303503 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.404899 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 
11:57:14.415340 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.543538 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.658944 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.852424 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.952304 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 11:57:14 crc kubenswrapper[4865]: I0123 11:57:14.992323 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.093818 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.165476 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.197927 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.375789 4865 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.380490 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.382300 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.382384 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.386839 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.408399 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.40835444 podStartE2EDuration="24.40835444s" podCreationTimestamp="2026-01-23 11:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:57:15.398809285 +0000 UTC m=+279.567881511" watchObservedRunningTime="2026-01-23 11:57:15.40835444 +0000 UTC m=+279.577426666" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.434922 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.440310 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 
11:57:15.529301 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.641525 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.726992 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.779546 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.779692 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 11:57:15 crc kubenswrapper[4865]: I0123 11:57:15.881438 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.044915 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.137939 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.150234 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.245736 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.255942 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.289076 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.326669 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.406133 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.409367 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.466110 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.485755 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.526130 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.624624 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.634096 
4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.639894 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 11:57:16 crc kubenswrapper[4865]: I0123 11:57:16.900153 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 11:57:17 crc kubenswrapper[4865]: I0123 11:57:17.037373 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 11:57:17 crc kubenswrapper[4865]: I0123 11:57:17.288137 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 11:57:17 crc kubenswrapper[4865]: I0123 11:57:17.314316 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 11:57:17 crc kubenswrapper[4865]: I0123 11:57:17.534070 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 11:57:17 crc kubenswrapper[4865]: I0123 11:57:17.557330 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 11:57:17 crc kubenswrapper[4865]: I0123 11:57:17.623864 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 11:57:17 crc kubenswrapper[4865]: I0123 11:57:17.753414 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 11:57:17 crc kubenswrapper[4865]: I0123 11:57:17.901984 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.045309 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.062815 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.272679 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5sbt"] Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.273116 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s5sbt" podUID="752a7b3b-7850-4bba-b8ce-be070452a538" containerName="registry-server" containerID="cri-o://86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919" gracePeriod=30 Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.283224 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f5nf9"] Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.283591 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f5nf9" podUID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" containerName="registry-server" containerID="cri-o://ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510" gracePeriod=30 Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.301730 4865 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mwzzv"] Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.302085 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" podUID="401d6c1a-be67-4fb7-97f6-d46e3ba35960" containerName="marketplace-operator" containerID="cri-o://5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53" gracePeriod=30 Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.324088 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9w54z"] Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.324515 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9w54z" podUID="778153be-8013-460c-8000-e58ba9f45cd9" containerName="registry-server" containerID="cri-o://5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b" gracePeriod=30 Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.328683 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l76kv"] Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.329105 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l76kv" podUID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" containerName="registry-server" containerID="cri-o://a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991" gracePeriod=30 Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.359241 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xpgm"] Jan 23 11:57:18 crc kubenswrapper[4865]: E0123 11:57:18.359700 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" containerName="installer" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.359786 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" containerName="installer" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.359990 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="886599e1-f8ad-4ffd-9b2b-7db39dec28ee" containerName="installer" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.360718 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.366942 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xpgm"] Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.433864 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/189c80ac-7038-4b48-bebb-5c5d7e2cd362-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7xpgm\" (UID: \"189c80ac-7038-4b48-bebb-5c5d7e2cd362\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.436504 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/189c80ac-7038-4b48-bebb-5c5d7e2cd362-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7xpgm\" (UID: \"189c80ac-7038-4b48-bebb-5c5d7e2cd362\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.436700 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swf7c\" (UniqueName: \"kubernetes.io/projected/189c80ac-7038-4b48-bebb-5c5d7e2cd362-kube-api-access-swf7c\") pod \"marketplace-operator-79b997595-7xpgm\" (UID: \"189c80ac-7038-4b48-bebb-5c5d7e2cd362\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.512711 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.538235 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/189c80ac-7038-4b48-bebb-5c5d7e2cd362-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7xpgm\" (UID: \"189c80ac-7038-4b48-bebb-5c5d7e2cd362\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.538320 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/189c80ac-7038-4b48-bebb-5c5d7e2cd362-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7xpgm\" (UID: \"189c80ac-7038-4b48-bebb-5c5d7e2cd362\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.538363 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swf7c\" (UniqueName: \"kubernetes.io/projected/189c80ac-7038-4b48-bebb-5c5d7e2cd362-kube-api-access-swf7c\") pod \"marketplace-operator-79b997595-7xpgm\" (UID: \"189c80ac-7038-4b48-bebb-5c5d7e2cd362\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.540046 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/189c80ac-7038-4b48-bebb-5c5d7e2cd362-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-7xpgm\" (UID: \"189c80ac-7038-4b48-bebb-5c5d7e2cd362\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.561727 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/189c80ac-7038-4b48-bebb-5c5d7e2cd362-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-7xpgm\" (UID: \"189c80ac-7038-4b48-bebb-5c5d7e2cd362\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.564577 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swf7c\" (UniqueName: \"kubernetes.io/projected/189c80ac-7038-4b48-bebb-5c5d7e2cd362-kube-api-access-swf7c\") pod \"marketplace-operator-79b997595-7xpgm\" (UID: \"189c80ac-7038-4b48-bebb-5c5d7e2cd362\") " pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.686343 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.720951 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.739867 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzwzn\" (UniqueName: \"kubernetes.io/projected/2d701fdb-266c-4e83-a0b6-099bfd0987a9-kube-api-access-mzwzn\") pod \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\" (UID: \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.739928 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d701fdb-266c-4e83-a0b6-099bfd0987a9-catalog-content\") pod \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\" (UID: \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.739959 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d701fdb-266c-4e83-a0b6-099bfd0987a9-utilities\") pod \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\" (UID: \"2d701fdb-266c-4e83-a0b6-099bfd0987a9\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.742373 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d701fdb-266c-4e83-a0b6-099bfd0987a9-utilities" (OuterVolumeSpecName: "utilities") pod "2d701fdb-266c-4e83-a0b6-099bfd0987a9" (UID: "2d701fdb-266c-4e83-a0b6-099bfd0987a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.748100 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d701fdb-266c-4e83-a0b6-099bfd0987a9-kube-api-access-mzwzn" (OuterVolumeSpecName: "kube-api-access-mzwzn") pod "2d701fdb-266c-4e83-a0b6-099bfd0987a9" (UID: "2d701fdb-266c-4e83-a0b6-099bfd0987a9"). InnerVolumeSpecName "kube-api-access-mzwzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.822511 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d701fdb-266c-4e83-a0b6-099bfd0987a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d701fdb-266c-4e83-a0b6-099bfd0987a9" (UID: "2d701fdb-266c-4e83-a0b6-099bfd0987a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.843291 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzwzn\" (UniqueName: \"kubernetes.io/projected/2d701fdb-266c-4e83-a0b6-099bfd0987a9-kube-api-access-mzwzn\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.843326 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d701fdb-266c-4e83-a0b6-099bfd0987a9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.843339 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d701fdb-266c-4e83-a0b6-099bfd0987a9-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.903248 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.906290 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.908008 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.912237 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.932133 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.943880 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778153be-8013-460c-8000-e58ba9f45cd9-catalog-content\") pod \"778153be-8013-460c-8000-e58ba9f45cd9\" (UID: \"778153be-8013-460c-8000-e58ba9f45cd9\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.943921 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/401d6c1a-be67-4fb7-97f6-d46e3ba35960-marketplace-operator-metrics\") pod \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\" (UID: \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.943951 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5t94\" (UniqueName: \"kubernetes.io/projected/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-kube-api-access-g5t94\") pod \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\" (UID: \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.943978 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/401d6c1a-be67-4fb7-97f6-d46e3ba35960-marketplace-trusted-ca\") pod \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\" (UID: \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.944000 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778153be-8013-460c-8000-e58ba9f45cd9-utilities\") pod \"778153be-8013-460c-8000-e58ba9f45cd9\" (UID: \"778153be-8013-460c-8000-e58ba9f45cd9\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.944018 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-catalog-content\") pod \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\" (UID: \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.944040 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zk56\" (UniqueName: \"kubernetes.io/projected/752a7b3b-7850-4bba-b8ce-be070452a538-kube-api-access-5zk56\") pod \"752a7b3b-7850-4bba-b8ce-be070452a538\" (UID: \"752a7b3b-7850-4bba-b8ce-be070452a538\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.944060 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/752a7b3b-7850-4bba-b8ce-be070452a538-catalog-content\") pod \"752a7b3b-7850-4bba-b8ce-be070452a538\" (UID: \"752a7b3b-7850-4bba-b8ce-be070452a538\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.944082 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn69j\" (UniqueName: \"kubernetes.io/projected/401d6c1a-be67-4fb7-97f6-d46e3ba35960-kube-api-access-zn69j\") pod \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\" (UID: \"401d6c1a-be67-4fb7-97f6-d46e3ba35960\") " Jan 23 11:57:18 crc 
kubenswrapper[4865]: I0123 11:57:18.944100 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-utilities\") pod \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\" (UID: \"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.944125 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bndwm\" (UniqueName: \"kubernetes.io/projected/778153be-8013-460c-8000-e58ba9f45cd9-kube-api-access-bndwm\") pod \"778153be-8013-460c-8000-e58ba9f45cd9\" (UID: \"778153be-8013-460c-8000-e58ba9f45cd9\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.944148 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/752a7b3b-7850-4bba-b8ce-be070452a538-utilities\") pod \"752a7b3b-7850-4bba-b8ce-be070452a538\" (UID: \"752a7b3b-7850-4bba-b8ce-be070452a538\") " Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.948554 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-utilities" (OuterVolumeSpecName: "utilities") pod "e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" (UID: "e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.951958 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/401d6c1a-be67-4fb7-97f6-d46e3ba35960-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "401d6c1a-be67-4fb7-97f6-d46e3ba35960" (UID: "401d6c1a-be67-4fb7-97f6-d46e3ba35960"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.952672 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/778153be-8013-460c-8000-e58ba9f45cd9-utilities" (OuterVolumeSpecName: "utilities") pod "778153be-8013-460c-8000-e58ba9f45cd9" (UID: "778153be-8013-460c-8000-e58ba9f45cd9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.953165 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401d6c1a-be67-4fb7-97f6-d46e3ba35960-kube-api-access-zn69j" (OuterVolumeSpecName: "kube-api-access-zn69j") pod "401d6c1a-be67-4fb7-97f6-d46e3ba35960" (UID: "401d6c1a-be67-4fb7-97f6-d46e3ba35960"). InnerVolumeSpecName "kube-api-access-zn69j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.954408 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/778153be-8013-460c-8000-e58ba9f45cd9-kube-api-access-bndwm" (OuterVolumeSpecName: "kube-api-access-bndwm") pod "778153be-8013-460c-8000-e58ba9f45cd9" (UID: "778153be-8013-460c-8000-e58ba9f45cd9"). InnerVolumeSpecName "kube-api-access-bndwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.955832 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/752a7b3b-7850-4bba-b8ce-be070452a538-utilities" (OuterVolumeSpecName: "utilities") pod "752a7b3b-7850-4bba-b8ce-be070452a538" (UID: "752a7b3b-7850-4bba-b8ce-be070452a538"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.957061 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401d6c1a-be67-4fb7-97f6-d46e3ba35960-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "401d6c1a-be67-4fb7-97f6-d46e3ba35960" (UID: "401d6c1a-be67-4fb7-97f6-d46e3ba35960"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.963751 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-kube-api-access-g5t94" (OuterVolumeSpecName: "kube-api-access-g5t94") pod "e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" (UID: "e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9"). InnerVolumeSpecName "kube-api-access-g5t94". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:57:18 crc kubenswrapper[4865]: I0123 11:57:18.965252 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/752a7b3b-7850-4bba-b8ce-be070452a538-kube-api-access-5zk56" (OuterVolumeSpecName: "kube-api-access-5zk56") pod "752a7b3b-7850-4bba-b8ce-be070452a538" (UID: "752a7b3b-7850-4bba-b8ce-be070452a538"). InnerVolumeSpecName "kube-api-access-5zk56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.000954 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/778153be-8013-460c-8000-e58ba9f45cd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "778153be-8013-460c-8000-e58ba9f45cd9" (UID: "778153be-8013-460c-8000-e58ba9f45cd9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.031569 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/752a7b3b-7850-4bba-b8ce-be070452a538-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "752a7b3b-7850-4bba-b8ce-be070452a538" (UID: "752a7b3b-7850-4bba-b8ce-be070452a538"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.045138 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zk56\" (UniqueName: \"kubernetes.io/projected/752a7b3b-7850-4bba-b8ce-be070452a538-kube-api-access-5zk56\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.045282 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/752a7b3b-7850-4bba-b8ce-be070452a538-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.045353 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn69j\" (UniqueName: \"kubernetes.io/projected/401d6c1a-be67-4fb7-97f6-d46e3ba35960-kube-api-access-zn69j\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.045410 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.045472 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bndwm\" (UniqueName: \"kubernetes.io/projected/778153be-8013-460c-8000-e58ba9f45cd9-kube-api-access-bndwm\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.045528 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/752a7b3b-7850-4bba-b8ce-be070452a538-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.045581 4865 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/401d6c1a-be67-4fb7-97f6-d46e3ba35960-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.045693 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/778153be-8013-460c-8000-e58ba9f45cd9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.045763 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5t94\" (UniqueName: \"kubernetes.io/projected/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-kube-api-access-g5t94\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.045819 4865 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/401d6c1a-be67-4fb7-97f6-d46e3ba35960-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.045880 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/778153be-8013-460c-8000-e58ba9f45cd9-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.056737 4865 generic.go:334] "Generic (PLEG): container finished" podID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" containerID="ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510" exitCode=0 Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.056798 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nf9" 
event={"ID":"2d701fdb-266c-4e83-a0b6-099bfd0987a9","Type":"ContainerDied","Data":"ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510"} Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.056829 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5nf9" event={"ID":"2d701fdb-266c-4e83-a0b6-099bfd0987a9","Type":"ContainerDied","Data":"c9024586abf232fb9a3fe5420c4d12b3e7dbe1059207d2874bb4c242b970fc5c"} Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.056845 4865 scope.go:117] "RemoveContainer" containerID="ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.056947 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f5nf9" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.064893 4865 generic.go:334] "Generic (PLEG): container finished" podID="778153be-8013-460c-8000-e58ba9f45cd9" containerID="5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b" exitCode=0 Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.065123 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9w54z" event={"ID":"778153be-8013-460c-8000-e58ba9f45cd9","Type":"ContainerDied","Data":"5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b"} Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.065233 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9w54z" event={"ID":"778153be-8013-460c-8000-e58ba9f45cd9","Type":"ContainerDied","Data":"ed9f8d60172867512a36b9d9c0d77505c413c7ac64bbfd5b1ac1e8f9c8c1d271"} Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.065390 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9w54z" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.075457 4865 generic.go:334] "Generic (PLEG): container finished" podID="752a7b3b-7850-4bba-b8ce-be070452a538" containerID="86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919" exitCode=0 Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.075543 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5sbt" event={"ID":"752a7b3b-7850-4bba-b8ce-be070452a538","Type":"ContainerDied","Data":"86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919"} Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.075580 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5sbt" event={"ID":"752a7b3b-7850-4bba-b8ce-be070452a538","Type":"ContainerDied","Data":"78b3d1a68fc1f9f013c5828a4250fe27c4058206b9afc07c2690c4d05a0f98e0"} Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.075685 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s5sbt" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.085364 4865 generic.go:334] "Generic (PLEG): container finished" podID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" containerID="a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991" exitCode=0 Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.085438 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l76kv" event={"ID":"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9","Type":"ContainerDied","Data":"a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991"} Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.085467 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l76kv" event={"ID":"e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9","Type":"ContainerDied","Data":"ede125b937d3d5f20bdbe3fa666380f2293bd8109591742ec052a22c7c13092d"} Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.085587 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l76kv" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.087503 4865 generic.go:334] "Generic (PLEG): container finished" podID="401d6c1a-be67-4fb7-97f6-d46e3ba35960" containerID="5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53" exitCode=0 Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.087536 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" event={"ID":"401d6c1a-be67-4fb7-97f6-d46e3ba35960","Type":"ContainerDied","Data":"5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53"} Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.087560 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" event={"ID":"401d6c1a-be67-4fb7-97f6-d46e3ba35960","Type":"ContainerDied","Data":"a784fa8d8251162adb391b8b38a65f4d4267e5d4f1110c96bc957680cb44cd3d"} Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.087770 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mwzzv" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.096918 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" (UID: "e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.100878 4865 scope.go:117] "RemoveContainer" containerID="7cabe49b196dd125b6536db907bdc2de9ac941093e5ff77bbc8092a9210e0be9" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.130255 4865 scope.go:117] "RemoveContainer" containerID="68bb8b4014767e16315e5b3078e08404ddd47c5d34f48c79a272af17c3e494ec" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.132692 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9w54z"] Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.140900 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9w54z"] Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.147768 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.159256 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f5nf9"] Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.160823 4865 scope.go:117] "RemoveContainer" containerID="ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.161268 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510\": container with ID starting with ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510 not found: ID does not exist" containerID="ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.161342 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510"} err="failed to get container status \"ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510\": rpc error: code = NotFound desc = could not find container \"ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510\": container with ID starting with ea89b74383d07ec6651c7c9ef822b3b9c99b1f95ee8bfc1f135c4b22b9b26510 not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.161976 4865 scope.go:117] "RemoveContainer" containerID="7cabe49b196dd125b6536db907bdc2de9ac941093e5ff77bbc8092a9210e0be9" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.162394 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cabe49b196dd125b6536db907bdc2de9ac941093e5ff77bbc8092a9210e0be9\": container with ID starting with 7cabe49b196dd125b6536db907bdc2de9ac941093e5ff77bbc8092a9210e0be9 not found: ID does not exist" containerID="7cabe49b196dd125b6536db907bdc2de9ac941093e5ff77bbc8092a9210e0be9" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.162433 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cabe49b196dd125b6536db907bdc2de9ac941093e5ff77bbc8092a9210e0be9"} err="failed to get container status \"7cabe49b196dd125b6536db907bdc2de9ac941093e5ff77bbc8092a9210e0be9\": rpc error: code = NotFound desc = could not find container \"7cabe49b196dd125b6536db907bdc2de9ac941093e5ff77bbc8092a9210e0be9\": 
container with ID starting with 7cabe49b196dd125b6536db907bdc2de9ac941093e5ff77bbc8092a9210e0be9 not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.162472 4865 scope.go:117] "RemoveContainer" containerID="68bb8b4014767e16315e5b3078e08404ddd47c5d34f48c79a272af17c3e494ec" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.162780 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68bb8b4014767e16315e5b3078e08404ddd47c5d34f48c79a272af17c3e494ec\": container with ID starting with 68bb8b4014767e16315e5b3078e08404ddd47c5d34f48c79a272af17c3e494ec not found: ID does not exist" containerID="68bb8b4014767e16315e5b3078e08404ddd47c5d34f48c79a272af17c3e494ec" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.162836 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68bb8b4014767e16315e5b3078e08404ddd47c5d34f48c79a272af17c3e494ec"} err="failed to get container status \"68bb8b4014767e16315e5b3078e08404ddd47c5d34f48c79a272af17c3e494ec\": rpc error: code = NotFound desc = could not find container \"68bb8b4014767e16315e5b3078e08404ddd47c5d34f48c79a272af17c3e494ec\": container with ID starting with 68bb8b4014767e16315e5b3078e08404ddd47c5d34f48c79a272af17c3e494ec not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.162853 4865 scope.go:117] "RemoveContainer" containerID="5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.168825 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f5nf9"] Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.173487 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s5sbt"] Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.175270 4865 scope.go:117] "RemoveContainer" containerID="7aeda3c14276640a25dc59f73206800dfbcbc3e50534dde6fcc2b7adf1d8ad23" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.178040 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s5sbt"] Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.182015 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mwzzv"] Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.185401 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mwzzv"] Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.186563 4865 scope.go:117] "RemoveContainer" containerID="58bcfa63b4e962842a48ec8d6e7b39180be7a7498db7b6e87c309b6fc4f977fd" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.210722 4865 scope.go:117] "RemoveContainer" containerID="5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.211568 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b\": container with ID starting with 5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b not found: ID does not exist" containerID="5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.211681 4865 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b"} err="failed to get container status \"5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b\": rpc error: code = NotFound desc = could not find container \"5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b\": container with ID starting with 5a5247c8bc3994f1eb62e9997b012b0bec52b52876bc289c9cd7cd8d40db879b not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.211739 4865 scope.go:117] "RemoveContainer" containerID="7aeda3c14276640a25dc59f73206800dfbcbc3e50534dde6fcc2b7adf1d8ad23" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.212174 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aeda3c14276640a25dc59f73206800dfbcbc3e50534dde6fcc2b7adf1d8ad23\": container with ID starting with 7aeda3c14276640a25dc59f73206800dfbcbc3e50534dde6fcc2b7adf1d8ad23 not found: ID does not exist" containerID="7aeda3c14276640a25dc59f73206800dfbcbc3e50534dde6fcc2b7adf1d8ad23" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.212213 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aeda3c14276640a25dc59f73206800dfbcbc3e50534dde6fcc2b7adf1d8ad23"} err="failed to get container status \"7aeda3c14276640a25dc59f73206800dfbcbc3e50534dde6fcc2b7adf1d8ad23\": rpc error: code = NotFound desc = could not find container \"7aeda3c14276640a25dc59f73206800dfbcbc3e50534dde6fcc2b7adf1d8ad23\": container with ID starting with 7aeda3c14276640a25dc59f73206800dfbcbc3e50534dde6fcc2b7adf1d8ad23 not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.212246 4865 scope.go:117] "RemoveContainer" containerID="58bcfa63b4e962842a48ec8d6e7b39180be7a7498db7b6e87c309b6fc4f977fd" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.213363 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58bcfa63b4e962842a48ec8d6e7b39180be7a7498db7b6e87c309b6fc4f977fd\": container with ID starting with 58bcfa63b4e962842a48ec8d6e7b39180be7a7498db7b6e87c309b6fc4f977fd not found: ID does not exist" containerID="58bcfa63b4e962842a48ec8d6e7b39180be7a7498db7b6e87c309b6fc4f977fd" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.213399 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58bcfa63b4e962842a48ec8d6e7b39180be7a7498db7b6e87c309b6fc4f977fd"} err="failed to get container status \"58bcfa63b4e962842a48ec8d6e7b39180be7a7498db7b6e87c309b6fc4f977fd\": rpc error: code = NotFound desc = could not find container \"58bcfa63b4e962842a48ec8d6e7b39180be7a7498db7b6e87c309b6fc4f977fd\": container with ID starting with 58bcfa63b4e962842a48ec8d6e7b39180be7a7498db7b6e87c309b6fc4f977fd not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.213420 4865 scope.go:117] "RemoveContainer" containerID="86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.227528 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-7xpgm"] Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.228352 4865 scope.go:117] "RemoveContainer" containerID="8c0cb241176adafecb3f27f29527d8d013bce63cff9c66509daa4d1e4d689bdf" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.242431 4865 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod778153be_8013_460c_8000_e58ba9f45cd9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod401d6c1a_be67_4fb7_97f6_d46e3ba35960.slice/crio-a784fa8d8251162adb391b8b38a65f4d4267e5d4f1110c96bc957680cb44cd3d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod778153be_8013_460c_8000_e58ba9f45cd9.slice/crio-ed9f8d60172867512a36b9d9c0d77505c413c7ac64bbfd5b1ac1e8f9c8c1d271\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod401d6c1a_be67_4fb7_97f6_d46e3ba35960.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod752a7b3b_7850_4bba_b8ce_be070452a538.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod752a7b3b_7850_4bba_b8ce_be070452a538.slice/crio-78b3d1a68fc1f9f013c5828a4250fe27c4058206b9afc07c2690c4d05a0f98e0\": RecentStats: unable to find data in memory cache]" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.252574 4865 scope.go:117] "RemoveContainer" containerID="a9ea781f2a1c4624229fa7600f7e5cc94c313f58f491ce52963cd99888122411" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.280914 4865 scope.go:117] "RemoveContainer" containerID="86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.281522 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919\": container with ID starting with 86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919 not found: ID does not exist" containerID="86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.281608 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919"} err="failed to get container status \"86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919\": rpc error: code = NotFound desc = could not find container \"86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919\": container with ID starting with 86d9eb985481c969b7f3255f6ac45ba02282bbb8632bb2e01a249f92f51ba919 not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.281650 4865 scope.go:117] "RemoveContainer" containerID="8c0cb241176adafecb3f27f29527d8d013bce63cff9c66509daa4d1e4d689bdf" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.282017 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c0cb241176adafecb3f27f29527d8d013bce63cff9c66509daa4d1e4d689bdf\": container with ID starting with 8c0cb241176adafecb3f27f29527d8d013bce63cff9c66509daa4d1e4d689bdf not found: ID does not exist" containerID="8c0cb241176adafecb3f27f29527d8d013bce63cff9c66509daa4d1e4d689bdf" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.282062 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8c0cb241176adafecb3f27f29527d8d013bce63cff9c66509daa4d1e4d689bdf"} err="failed to get container status \"8c0cb241176adafecb3f27f29527d8d013bce63cff9c66509daa4d1e4d689bdf\": rpc error: code = NotFound desc = could not find container \"8c0cb241176adafecb3f27f29527d8d013bce63cff9c66509daa4d1e4d689bdf\": container with ID starting with 8c0cb241176adafecb3f27f29527d8d013bce63cff9c66509daa4d1e4d689bdf not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.282103 4865 scope.go:117] "RemoveContainer" containerID="a9ea781f2a1c4624229fa7600f7e5cc94c313f58f491ce52963cd99888122411" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.282552 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9ea781f2a1c4624229fa7600f7e5cc94c313f58f491ce52963cd99888122411\": container with ID starting with a9ea781f2a1c4624229fa7600f7e5cc94c313f58f491ce52963cd99888122411 not found: ID does not exist" containerID="a9ea781f2a1c4624229fa7600f7e5cc94c313f58f491ce52963cd99888122411" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.282634 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9ea781f2a1c4624229fa7600f7e5cc94c313f58f491ce52963cd99888122411"} err="failed to get container status \"a9ea781f2a1c4624229fa7600f7e5cc94c313f58f491ce52963cd99888122411\": rpc error: code = NotFound desc = could not find container \"a9ea781f2a1c4624229fa7600f7e5cc94c313f58f491ce52963cd99888122411\": container with ID starting with a9ea781f2a1c4624229fa7600f7e5cc94c313f58f491ce52963cd99888122411 not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.282718 4865 scope.go:117] "RemoveContainer" containerID="a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.288067 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.301139 4865 scope.go:117] "RemoveContainer" containerID="554582067d71378a91ae9cc7bb5ce6a913670ce60a969435a56fee72a580ff7e" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.322875 4865 scope.go:117] "RemoveContainer" containerID="c49f357a2ee9cfab85abf2348b356e3c44ca803b505c9498668faeb5e5aa4cf1" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.342393 4865 scope.go:117] "RemoveContainer" containerID="a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.343028 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991\": container with ID starting with a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991 not found: ID does not exist" containerID="a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.343077 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991"} err="failed to get container status \"a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991\": rpc error: code = NotFound desc = could not find container \"a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991\": container with ID 
starting with a9fd61e1ffdfc6da638ce9f6f0695eec82070d0a91ace5eed1fb349f62cd1991 not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.343111 4865 scope.go:117] "RemoveContainer" containerID="554582067d71378a91ae9cc7bb5ce6a913670ce60a969435a56fee72a580ff7e" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.343693 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"554582067d71378a91ae9cc7bb5ce6a913670ce60a969435a56fee72a580ff7e\": container with ID starting with 554582067d71378a91ae9cc7bb5ce6a913670ce60a969435a56fee72a580ff7e not found: ID does not exist" containerID="554582067d71378a91ae9cc7bb5ce6a913670ce60a969435a56fee72a580ff7e" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.343756 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"554582067d71378a91ae9cc7bb5ce6a913670ce60a969435a56fee72a580ff7e"} err="failed to get container status \"554582067d71378a91ae9cc7bb5ce6a913670ce60a969435a56fee72a580ff7e\": rpc error: code = NotFound desc = could not find container \"554582067d71378a91ae9cc7bb5ce6a913670ce60a969435a56fee72a580ff7e\": container with ID starting with 554582067d71378a91ae9cc7bb5ce6a913670ce60a969435a56fee72a580ff7e not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.343806 4865 scope.go:117] "RemoveContainer" containerID="c49f357a2ee9cfab85abf2348b356e3c44ca803b505c9498668faeb5e5aa4cf1" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.344372 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c49f357a2ee9cfab85abf2348b356e3c44ca803b505c9498668faeb5e5aa4cf1\": container with ID starting with c49f357a2ee9cfab85abf2348b356e3c44ca803b505c9498668faeb5e5aa4cf1 not found: ID does not exist" containerID="c49f357a2ee9cfab85abf2348b356e3c44ca803b505c9498668faeb5e5aa4cf1" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.344401 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c49f357a2ee9cfab85abf2348b356e3c44ca803b505c9498668faeb5e5aa4cf1"} err="failed to get container status \"c49f357a2ee9cfab85abf2348b356e3c44ca803b505c9498668faeb5e5aa4cf1\": rpc error: code = NotFound desc = could not find container \"c49f357a2ee9cfab85abf2348b356e3c44ca803b505c9498668faeb5e5aa4cf1\": container with ID starting with c49f357a2ee9cfab85abf2348b356e3c44ca803b505c9498668faeb5e5aa4cf1 not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.344418 4865 scope.go:117] "RemoveContainer" containerID="5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.361710 4865 scope.go:117] "RemoveContainer" containerID="5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53" Jan 23 11:57:19 crc kubenswrapper[4865]: E0123 11:57:19.363087 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53\": container with ID starting with 5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53 not found: ID does not exist" containerID="5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.363128 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53"} err="failed to get container status \"5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53\": rpc error: code = NotFound desc = could not find container \"5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53\": container with ID starting with 5b3691f6ec1df075af8faf33ecd7535a0ab76d865949ccfe078025bed16d9a53 not found: ID does not exist" Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.469479 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l76kv"] Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.474904 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l76kv"] Jan 23 11:57:19 crc kubenswrapper[4865]: I0123 11:57:19.649693 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 11:57:20 crc kubenswrapper[4865]: I0123 11:57:20.100245 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" event={"ID":"189c80ac-7038-4b48-bebb-5c5d7e2cd362","Type":"ContainerStarted","Data":"9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945"} Jan 23 11:57:20 crc kubenswrapper[4865]: I0123 11:57:20.100653 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" event={"ID":"189c80ac-7038-4b48-bebb-5c5d7e2cd362","Type":"ContainerStarted","Data":"b9070afd8a283df95bbf46126024ffeb6e7ff1f7e0c97767335b872f54555b28"} Jan 23 11:57:20 crc kubenswrapper[4865]: I0123 11:57:20.100673 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:20 crc kubenswrapper[4865]: I0123 11:57:20.112387 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 11:57:20 crc kubenswrapper[4865]: I0123 11:57:20.133375 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" path="/var/lib/kubelet/pods/2d701fdb-266c-4e83-a0b6-099bfd0987a9/volumes" Jan 23 11:57:20 crc kubenswrapper[4865]: I0123 11:57:20.133863 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podStartSLOduration=2.133818119 podStartE2EDuration="2.133818119s" podCreationTimestamp="2026-01-23 11:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:57:20.12409381 +0000 UTC m=+284.293166076" watchObservedRunningTime="2026-01-23 11:57:20.133818119 +0000 UTC m=+284.302890375" Jan 23 11:57:20 crc kubenswrapper[4865]: I0123 11:57:20.134262 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="401d6c1a-be67-4fb7-97f6-d46e3ba35960" path="/var/lib/kubelet/pods/401d6c1a-be67-4fb7-97f6-d46e3ba35960/volumes" Jan 23 11:57:20 crc kubenswrapper[4865]: I0123 11:57:20.134871 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="752a7b3b-7850-4bba-b8ce-be070452a538" path="/var/lib/kubelet/pods/752a7b3b-7850-4bba-b8ce-be070452a538/volumes" Jan 23 11:57:20 crc kubenswrapper[4865]: I0123 11:57:20.138335 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="778153be-8013-460c-8000-e58ba9f45cd9" path="/var/lib/kubelet/pods/778153be-8013-460c-8000-e58ba9f45cd9/volumes" Jan 23 11:57:20 crc kubenswrapper[4865]: I0123 11:57:20.139786 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" path="/var/lib/kubelet/pods/e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9/volumes" Jan 23 11:57:25 crc kubenswrapper[4865]: I0123 11:57:25.700196 4865 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 11:57:25 crc kubenswrapper[4865]: I0123 11:57:25.700471 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://df381d27e123f9a846e1510a8c97673fbe3ddef3562ca908acda6796694ccf46" gracePeriod=5 Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.164253 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.164899 4865 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="df381d27e123f9a846e1510a8c97673fbe3ddef3562ca908acda6796694ccf46" exitCode=137 Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.271875 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.271974 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.406844 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.407300 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.407342 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.407378 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.407505 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.406989 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.407710 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.407725 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.407833 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.408119 4865 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.408156 4865 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.408174 4865 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.408192 4865 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.417862 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 11:57:31 crc kubenswrapper[4865]: I0123 11:57:31.509479 4865 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 11:57:32 crc kubenswrapper[4865]: I0123 11:57:32.125269 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 23 11:57:32 crc kubenswrapper[4865]: I0123 11:57:32.177666 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 11:57:32 crc kubenswrapper[4865]: I0123 11:57:32.177729 4865 scope.go:117] "RemoveContainer" containerID="df381d27e123f9a846e1510a8c97673fbe3ddef3562ca908acda6796694ccf46" Jan 23 11:57:32 crc kubenswrapper[4865]: I0123 11:57:32.177832 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 11:57:35 crc kubenswrapper[4865]: I0123 11:57:35.983212 4865 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.910419 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tqvjg"] Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911099 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" containerName="registry-server" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911113 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" containerName="registry-server" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911121 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" containerName="registry-server" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911126 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" containerName="registry-server" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911138 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752a7b3b-7850-4bba-b8ce-be070452a538" containerName="extract-content" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911143 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="752a7b3b-7850-4bba-b8ce-be070452a538" containerName="extract-content" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911149 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" containerName="extract-content" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911155 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" containerName="extract-content" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911164 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" containerName="extract-utilities" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911170 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" containerName="extract-utilities" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911178 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752a7b3b-7850-4bba-b8ce-be070452a538" containerName="extract-utilities" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911184 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="752a7b3b-7850-4bba-b8ce-be070452a538" containerName="extract-utilities" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911190 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" containerName="extract-content" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911196 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" containerName="extract-content" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911203 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752a7b3b-7850-4bba-b8ce-be070452a538" containerName="registry-server" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911209 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="752a7b3b-7850-4bba-b8ce-be070452a538" containerName="registry-server" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911217 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778153be-8013-460c-8000-e58ba9f45cd9" containerName="extract-content" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911223 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="778153be-8013-460c-8000-e58ba9f45cd9" containerName="extract-content" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911230 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911236 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911243 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" containerName="extract-utilities" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911248 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" containerName="extract-utilities" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911256 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401d6c1a-be67-4fb7-97f6-d46e3ba35960" containerName="marketplace-operator" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911262 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="401d6c1a-be67-4fb7-97f6-d46e3ba35960" containerName="marketplace-operator" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911270 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778153be-8013-460c-8000-e58ba9f45cd9" containerName="registry-server" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911275 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="778153be-8013-460c-8000-e58ba9f45cd9" containerName="registry-server" Jan 23 11:57:49 crc kubenswrapper[4865]: E0123 11:57:49.911284 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="778153be-8013-460c-8000-e58ba9f45cd9" containerName="extract-utilities" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911290 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="778153be-8013-460c-8000-e58ba9f45cd9" containerName="extract-utilities" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911380 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0004a4a-3db5-4281-8d3c-3b9ddbb05ee9" containerName="registry-server" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911393 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d701fdb-266c-4e83-a0b6-099bfd0987a9" containerName="registry-server" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911403 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911410 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="778153be-8013-460c-8000-e58ba9f45cd9" containerName="registry-server" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911420 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="752a7b3b-7850-4bba-b8ce-be070452a538" containerName="registry-server" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.911432 4865 
memory_manager.go:354] "RemoveStaleState removing state" podUID="401d6c1a-be67-4fb7-97f6-d46e3ba35960" containerName="marketplace-operator" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.912114 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.915033 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.930177 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tqvjg"] Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.979891 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef4926-eb81-4d83-a9a1-4b7e9035892f-catalog-content\") pod \"redhat-operators-tqvjg\" (UID: \"67ef4926-eb81-4d83-a9a1-4b7e9035892f\") " pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.980018 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef4926-eb81-4d83-a9a1-4b7e9035892f-utilities\") pod \"redhat-operators-tqvjg\" (UID: \"67ef4926-eb81-4d83-a9a1-4b7e9035892f\") " pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:57:49 crc kubenswrapper[4865]: I0123 11:57:49.980040 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qlcx\" (UniqueName: \"kubernetes.io/projected/67ef4926-eb81-4d83-a9a1-4b7e9035892f-kube-api-access-8qlcx\") pod \"redhat-operators-tqvjg\" (UID: \"67ef4926-eb81-4d83-a9a1-4b7e9035892f\") " pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.081397 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef4926-eb81-4d83-a9a1-4b7e9035892f-utilities\") pod \"redhat-operators-tqvjg\" (UID: \"67ef4926-eb81-4d83-a9a1-4b7e9035892f\") " pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.081445 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qlcx\" (UniqueName: \"kubernetes.io/projected/67ef4926-eb81-4d83-a9a1-4b7e9035892f-kube-api-access-8qlcx\") pod \"redhat-operators-tqvjg\" (UID: \"67ef4926-eb81-4d83-a9a1-4b7e9035892f\") " pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.081496 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef4926-eb81-4d83-a9a1-4b7e9035892f-catalog-content\") pod \"redhat-operators-tqvjg\" (UID: \"67ef4926-eb81-4d83-a9a1-4b7e9035892f\") " pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.082031 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ef4926-eb81-4d83-a9a1-4b7e9035892f-catalog-content\") pod \"redhat-operators-tqvjg\" (UID: \"67ef4926-eb81-4d83-a9a1-4b7e9035892f\") " pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.083190 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ef4926-eb81-4d83-a9a1-4b7e9035892f-utilities\") pod \"redhat-operators-tqvjg\" (UID: \"67ef4926-eb81-4d83-a9a1-4b7e9035892f\") " pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.107682 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qlcx\" (UniqueName: \"kubernetes.io/projected/67ef4926-eb81-4d83-a9a1-4b7e9035892f-kube-api-access-8qlcx\") pod \"redhat-operators-tqvjg\" (UID: \"67ef4926-eb81-4d83-a9a1-4b7e9035892f\") " pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.127547 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nhd4g"] Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.129165 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.131799 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.139137 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhd4g"] Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.182808 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z74nc\" (UniqueName: \"kubernetes.io/projected/c9ae9da8-9e6d-44ba-82c9-9842698cfa4f-kube-api-access-z74nc\") pod \"redhat-marketplace-nhd4g\" (UID: \"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f\") " pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.182890 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ae9da8-9e6d-44ba-82c9-9842698cfa4f-catalog-content\") pod \"redhat-marketplace-nhd4g\" (UID: \"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f\") " pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.182965 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ae9da8-9e6d-44ba-82c9-9842698cfa4f-utilities\") pod \"redhat-marketplace-nhd4g\" (UID: \"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f\") " pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.231352 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.284344 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ae9da8-9e6d-44ba-82c9-9842698cfa4f-catalog-content\") pod \"redhat-marketplace-nhd4g\" (UID: \"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f\") " pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.284412 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ae9da8-9e6d-44ba-82c9-9842698cfa4f-utilities\") pod \"redhat-marketplace-nhd4g\" (UID: \"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f\") " pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.284481 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z74nc\" (UniqueName: \"kubernetes.io/projected/c9ae9da8-9e6d-44ba-82c9-9842698cfa4f-kube-api-access-z74nc\") pod \"redhat-marketplace-nhd4g\" (UID: \"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f\") " pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.285250 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9ae9da8-9e6d-44ba-82c9-9842698cfa4f-catalog-content\") pod \"redhat-marketplace-nhd4g\" (UID: \"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f\") " pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.285704 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9ae9da8-9e6d-44ba-82c9-9842698cfa4f-utilities\") pod \"redhat-marketplace-nhd4g\" (UID: \"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f\") " pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.312962 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z74nc\" (UniqueName: \"kubernetes.io/projected/c9ae9da8-9e6d-44ba-82c9-9842698cfa4f-kube-api-access-z74nc\") pod \"redhat-marketplace-nhd4g\" (UID: \"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f\") " pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.456710 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.704446 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tqvjg"] Jan 23 11:57:50 crc kubenswrapper[4865]: I0123 11:57:50.907145 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhd4g"] Jan 23 11:57:51 crc kubenswrapper[4865]: I0123 11:57:51.312975 4865 generic.go:334] "Generic (PLEG): container finished" podID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerID="3ab148b7ee22d1375bb98eb3acad5c23af172c95e80305b501ef4a449f0e58b1" exitCode=0 Jan 23 11:57:51 crc kubenswrapper[4865]: I0123 11:57:51.313112 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhd4g" event={"ID":"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f","Type":"ContainerDied","Data":"3ab148b7ee22d1375bb98eb3acad5c23af172c95e80305b501ef4a449f0e58b1"} Jan 23 11:57:51 crc kubenswrapper[4865]: I0123 11:57:51.313159 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhd4g" event={"ID":"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f","Type":"ContainerStarted","Data":"e62fa014927baf72b3dc61e88231304a919ed549f67d33de12259c04e672cbcb"} Jan 23 11:57:51 crc kubenswrapper[4865]: I0123 11:57:51.319459 4865 generic.go:334] "Generic (PLEG): container finished" podID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerID="3ed11877700e2aab238ee11f418169bcb212883509c7509ee5b3e7daf57c0ea2" exitCode=0 Jan 23 11:57:51 crc kubenswrapper[4865]: I0123 11:57:51.319508 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqvjg" event={"ID":"67ef4926-eb81-4d83-a9a1-4b7e9035892f","Type":"ContainerDied","Data":"3ed11877700e2aab238ee11f418169bcb212883509c7509ee5b3e7daf57c0ea2"} Jan 23 11:57:51 crc kubenswrapper[4865]: I0123 11:57:51.319544 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqvjg" event={"ID":"67ef4926-eb81-4d83-a9a1-4b7e9035892f","Type":"ContainerStarted","Data":"45ebf2d35919d72d28dc0e7dd9eaa76b37c17df31baf3e125c3f74dd9bfbd321"} Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.329361 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hh6cp"] Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.331542 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.337684 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.346645 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hh6cp"] Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.415662 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65xbx\" (UniqueName: \"kubernetes.io/projected/14894ab1-ecfc-4a37-a4f3-bc526eb55ce2-kube-api-access-65xbx\") pod \"community-operators-hh6cp\" (UID: \"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2\") " pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.415708 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14894ab1-ecfc-4a37-a4f3-bc526eb55ce2-catalog-content\") pod \"community-operators-hh6cp\" (UID: \"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2\") " pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.415744 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14894ab1-ecfc-4a37-a4f3-bc526eb55ce2-utilities\") pod \"community-operators-hh6cp\" (UID: \"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2\") " pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.514744 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qwxxg"] Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.516345 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.517220 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65xbx\" (UniqueName: \"kubernetes.io/projected/14894ab1-ecfc-4a37-a4f3-bc526eb55ce2-kube-api-access-65xbx\") pod \"community-operators-hh6cp\" (UID: \"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2\") " pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.517365 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14894ab1-ecfc-4a37-a4f3-bc526eb55ce2-catalog-content\") pod \"community-operators-hh6cp\" (UID: \"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2\") " pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.517451 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14894ab1-ecfc-4a37-a4f3-bc526eb55ce2-utilities\") pod \"community-operators-hh6cp\" (UID: \"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2\") " pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.518088 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14894ab1-ecfc-4a37-a4f3-bc526eb55ce2-utilities\") pod \"community-operators-hh6cp\" (UID: \"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2\") " pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.518236 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14894ab1-ecfc-4a37-a4f3-bc526eb55ce2-catalog-content\") pod \"community-operators-hh6cp\" (UID: \"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2\") " pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.526911 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.534755 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qwxxg"] Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.589979 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65xbx\" (UniqueName: \"kubernetes.io/projected/14894ab1-ecfc-4a37-a4f3-bc526eb55ce2-kube-api-access-65xbx\") pod \"community-operators-hh6cp\" (UID: \"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2\") " pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.709832 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.720269 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bcb4671-0b01-435d-aa4b-b9596654bfff-catalog-content\") pod \"certified-operators-qwxxg\" (UID: \"2bcb4671-0b01-435d-aa4b-b9596654bfff\") " pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.720313 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bcb4671-0b01-435d-aa4b-b9596654bfff-utilities\") pod \"certified-operators-qwxxg\" (UID: \"2bcb4671-0b01-435d-aa4b-b9596654bfff\") " pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.720370 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj4n6\" (UniqueName: \"kubernetes.io/projected/2bcb4671-0b01-435d-aa4b-b9596654bfff-kube-api-access-sj4n6\") pod \"certified-operators-qwxxg\" (UID: \"2bcb4671-0b01-435d-aa4b-b9596654bfff\") " pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.821974 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bcb4671-0b01-435d-aa4b-b9596654bfff-catalog-content\") pod \"certified-operators-qwxxg\" (UID: \"2bcb4671-0b01-435d-aa4b-b9596654bfff\") " pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.822541 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bcb4671-0b01-435d-aa4b-b9596654bfff-utilities\") pod \"certified-operators-qwxxg\" (UID: \"2bcb4671-0b01-435d-aa4b-b9596654bfff\") " pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.822474 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bcb4671-0b01-435d-aa4b-b9596654bfff-catalog-content\") pod \"certified-operators-qwxxg\" (UID: \"2bcb4671-0b01-435d-aa4b-b9596654bfff\") " pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.823114 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bcb4671-0b01-435d-aa4b-b9596654bfff-utilities\") pod \"certified-operators-qwxxg\" (UID: \"2bcb4671-0b01-435d-aa4b-b9596654bfff\") " pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.823415 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj4n6\" (UniqueName: \"kubernetes.io/projected/2bcb4671-0b01-435d-aa4b-b9596654bfff-kube-api-access-sj4n6\") pod \"certified-operators-qwxxg\" (UID: \"2bcb4671-0b01-435d-aa4b-b9596654bfff\") " pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:57:52 crc kubenswrapper[4865]: I0123 11:57:52.839287 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj4n6\" (UniqueName: \"kubernetes.io/projected/2bcb4671-0b01-435d-aa4b-b9596654bfff-kube-api-access-sj4n6\") pod 
\"certified-operators-qwxxg\" (UID: \"2bcb4671-0b01-435d-aa4b-b9596654bfff\") " pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:57:53 crc kubenswrapper[4865]: I0123 11:57:53.129744 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:57:53 crc kubenswrapper[4865]: I0123 11:57:53.349465 4865 generic.go:334] "Generic (PLEG): container finished" podID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerID="270f73f024ad913f38fc019a911697f5a80f206eadce12b5236d36f6dbe63c84" exitCode=0 Jan 23 11:57:53 crc kubenswrapper[4865]: I0123 11:57:53.349530 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqvjg" event={"ID":"67ef4926-eb81-4d83-a9a1-4b7e9035892f","Type":"ContainerDied","Data":"270f73f024ad913f38fc019a911697f5a80f206eadce12b5236d36f6dbe63c84"} Jan 23 11:57:53 crc kubenswrapper[4865]: I0123 11:57:53.354498 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhd4g" event={"ID":"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f","Type":"ContainerStarted","Data":"2f2a98048534db7ff9997cccbc84b90a73d037918a2a5064dea46241d2259256"} Jan 23 11:57:53 crc kubenswrapper[4865]: I0123 11:57:53.408848 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hh6cp"] Jan 23 11:57:53 crc kubenswrapper[4865]: W0123 11:57:53.420334 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14894ab1_ecfc_4a37_a4f3_bc526eb55ce2.slice/crio-ecdf01f2ce2da9f79f7ec99c53e16a7001c0c9edca2ad2088ff7ae58528bfec7 WatchSource:0}: Error finding container ecdf01f2ce2da9f79f7ec99c53e16a7001c0c9edca2ad2088ff7ae58528bfec7: Status 404 returned error can't find the container with id ecdf01f2ce2da9f79f7ec99c53e16a7001c0c9edca2ad2088ff7ae58528bfec7 Jan 23 11:57:53 crc kubenswrapper[4865]: W0123 11:57:53.602686 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bcb4671_0b01_435d_aa4b_b9596654bfff.slice/crio-1e42a4ccc6b2f7a410f20d5d5c92a68e6dfb7d4c0a73aadeeb71122f38502260 WatchSource:0}: Error finding container 1e42a4ccc6b2f7a410f20d5d5c92a68e6dfb7d4c0a73aadeeb71122f38502260: Status 404 returned error can't find the container with id 1e42a4ccc6b2f7a410f20d5d5c92a68e6dfb7d4c0a73aadeeb71122f38502260 Jan 23 11:57:53 crc kubenswrapper[4865]: I0123 11:57:53.609928 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qwxxg"] Jan 23 11:57:54 crc kubenswrapper[4865]: I0123 11:57:54.365522 4865 generic.go:334] "Generic (PLEG): container finished" podID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerID="02b2f7f82665650b2cd1a7174a8e96d02dd6788af66af289701848a0ded87b24" exitCode=0 Jan 23 11:57:54 crc kubenswrapper[4865]: I0123 11:57:54.365675 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh6cp" event={"ID":"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2","Type":"ContainerDied","Data":"02b2f7f82665650b2cd1a7174a8e96d02dd6788af66af289701848a0ded87b24"} Jan 23 11:57:54 crc kubenswrapper[4865]: I0123 11:57:54.366112 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh6cp" event={"ID":"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2","Type":"ContainerStarted","Data":"ecdf01f2ce2da9f79f7ec99c53e16a7001c0c9edca2ad2088ff7ae58528bfec7"} Jan 
23 11:57:54 crc kubenswrapper[4865]: I0123 11:57:54.370215 4865 generic.go:334] "Generic (PLEG): container finished" podID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerID="0083d3c02d644d632e498010db7fd8878e12f8a149fb6871e755fac564428aa8" exitCode=0 Jan 23 11:57:54 crc kubenswrapper[4865]: I0123 11:57:54.370465 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxxg" event={"ID":"2bcb4671-0b01-435d-aa4b-b9596654bfff","Type":"ContainerDied","Data":"0083d3c02d644d632e498010db7fd8878e12f8a149fb6871e755fac564428aa8"} Jan 23 11:57:54 crc kubenswrapper[4865]: I0123 11:57:54.370508 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxxg" event={"ID":"2bcb4671-0b01-435d-aa4b-b9596654bfff","Type":"ContainerStarted","Data":"1e42a4ccc6b2f7a410f20d5d5c92a68e6dfb7d4c0a73aadeeb71122f38502260"} Jan 23 11:57:54 crc kubenswrapper[4865]: I0123 11:57:54.375955 4865 generic.go:334] "Generic (PLEG): container finished" podID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerID="2f2a98048534db7ff9997cccbc84b90a73d037918a2a5064dea46241d2259256" exitCode=0 Jan 23 11:57:54 crc kubenswrapper[4865]: I0123 11:57:54.376009 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhd4g" event={"ID":"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f","Type":"ContainerDied","Data":"2f2a98048534db7ff9997cccbc84b90a73d037918a2a5064dea46241d2259256"} Jan 23 11:57:55 crc kubenswrapper[4865]: I0123 11:57:55.381059 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqvjg" event={"ID":"67ef4926-eb81-4d83-a9a1-4b7e9035892f","Type":"ContainerStarted","Data":"68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec"} Jan 23 11:57:55 crc kubenswrapper[4865]: I0123 11:57:55.384912 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhd4g" event={"ID":"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f","Type":"ContainerStarted","Data":"d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff"} Jan 23 11:57:55 crc kubenswrapper[4865]: I0123 11:57:55.405471 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tqvjg" podStartSLOduration=3.483116456 podStartE2EDuration="6.40545138s" podCreationTimestamp="2026-01-23 11:57:49 +0000 UTC" firstStartedPulling="2026-01-23 11:57:51.323796638 +0000 UTC m=+315.492868894" lastFinishedPulling="2026-01-23 11:57:54.246131552 +0000 UTC m=+318.415203818" observedRunningTime="2026-01-23 11:57:55.401618301 +0000 UTC m=+319.570690527" watchObservedRunningTime="2026-01-23 11:57:55.40545138 +0000 UTC m=+319.574523606" Jan 23 11:57:55 crc kubenswrapper[4865]: I0123 11:57:55.426563 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nhd4g" podStartSLOduration=1.62831524 podStartE2EDuration="5.4265475s" podCreationTimestamp="2026-01-23 11:57:50 +0000 UTC" firstStartedPulling="2026-01-23 11:57:51.315406803 +0000 UTC m=+315.484479069" lastFinishedPulling="2026-01-23 11:57:55.113639103 +0000 UTC m=+319.282711329" observedRunningTime="2026-01-23 11:57:55.424413986 +0000 UTC m=+319.593486222" watchObservedRunningTime="2026-01-23 11:57:55.4265475 +0000 UTC m=+319.595619726" Jan 23 11:57:56 crc kubenswrapper[4865]: I0123 11:57:56.392711 4865 generic.go:334] "Generic (PLEG): container finished" podID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" 
containerID="63495ab539efa0bf803a377e1bf41d38aefdafde91b18ce936e10c072c275b67" exitCode=0 Jan 23 11:57:56 crc kubenswrapper[4865]: I0123 11:57:56.392858 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh6cp" event={"ID":"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2","Type":"ContainerDied","Data":"63495ab539efa0bf803a377e1bf41d38aefdafde91b18ce936e10c072c275b67"} Jan 23 11:57:56 crc kubenswrapper[4865]: I0123 11:57:56.397757 4865 generic.go:334] "Generic (PLEG): container finished" podID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerID="17412d1410a64788b7c62e76e20611bb73bc40c1275629bf94ff419d5b2188c8" exitCode=0 Jan 23 11:57:56 crc kubenswrapper[4865]: I0123 11:57:56.397919 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxxg" event={"ID":"2bcb4671-0b01-435d-aa4b-b9596654bfff","Type":"ContainerDied","Data":"17412d1410a64788b7c62e76e20611bb73bc40c1275629bf94ff419d5b2188c8"} Jan 23 11:57:57 crc kubenswrapper[4865]: I0123 11:57:57.405114 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh6cp" event={"ID":"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2","Type":"ContainerStarted","Data":"578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1"} Jan 23 11:57:58 crc kubenswrapper[4865]: I0123 11:57:58.411868 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxxg" event={"ID":"2bcb4671-0b01-435d-aa4b-b9596654bfff","Type":"ContainerStarted","Data":"4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630"} Jan 23 11:57:58 crc kubenswrapper[4865]: I0123 11:57:58.431065 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hh6cp" podStartSLOduration=3.653258009 podStartE2EDuration="6.43104743s" podCreationTimestamp="2026-01-23 11:57:52 +0000 UTC" firstStartedPulling="2026-01-23 11:57:54.367446961 +0000 UTC m=+318.536519197" lastFinishedPulling="2026-01-23 11:57:57.145236382 +0000 UTC m=+321.314308618" observedRunningTime="2026-01-23 11:57:57.42887374 +0000 UTC m=+321.597945966" watchObservedRunningTime="2026-01-23 11:57:58.43104743 +0000 UTC m=+322.600119656" Jan 23 11:58:00 crc kubenswrapper[4865]: I0123 11:58:00.232200 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:58:00 crc kubenswrapper[4865]: I0123 11:58:00.232519 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:58:00 crc kubenswrapper[4865]: I0123 11:58:00.457824 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:58:00 crc kubenswrapper[4865]: I0123 11:58:00.457895 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:58:00 crc kubenswrapper[4865]: I0123 11:58:00.516349 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:58:00 crc kubenswrapper[4865]: I0123 11:58:00.536379 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qwxxg" podStartSLOduration=5.488325043 podStartE2EDuration="8.536362349s" podCreationTimestamp="2026-01-23 11:57:52 +0000 UTC" firstStartedPulling="2026-01-23 
11:57:54.372584842 +0000 UTC m=+318.541657088" lastFinishedPulling="2026-01-23 11:57:57.420622168 +0000 UTC m=+321.589694394" observedRunningTime="2026-01-23 11:57:58.434634642 +0000 UTC m=+322.603706888" watchObservedRunningTime="2026-01-23 11:58:00.536362349 +0000 UTC m=+324.705434575" Jan 23 11:58:01 crc kubenswrapper[4865]: I0123 11:58:01.280629 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" probeResult="failure" output=< Jan 23 11:58:01 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 11:58:01 crc kubenswrapper[4865]: > Jan 23 11:58:01 crc kubenswrapper[4865]: I0123 11:58:01.475723 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 11:58:02 crc kubenswrapper[4865]: I0123 11:58:02.710425 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:58:02 crc kubenswrapper[4865]: I0123 11:58:02.710481 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:58:02 crc kubenswrapper[4865]: I0123 11:58:02.781972 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:58:03 crc kubenswrapper[4865]: I0123 11:58:03.129962 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:58:03 crc kubenswrapper[4865]: I0123 11:58:03.130145 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:58:03 crc kubenswrapper[4865]: I0123 11:58:03.168506 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:58:03 crc kubenswrapper[4865]: I0123 11:58:03.477191 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 11:58:03 crc kubenswrapper[4865]: I0123 11:58:03.481328 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hh6cp" Jan 23 11:58:10 crc kubenswrapper[4865]: I0123 11:58:10.271263 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:58:10 crc kubenswrapper[4865]: I0123 11:58:10.305281 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 11:58:48 crc kubenswrapper[4865]: I0123 11:58:48.776618 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 11:58:48 crc kubenswrapper[4865]: I0123 11:58:48.777077 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 
11:58:51 crc kubenswrapper[4865]: I0123 11:58:51.885662 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sgsqx"] Jan 23 11:58:51 crc kubenswrapper[4865]: I0123 11:58:51.888049 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:51 crc kubenswrapper[4865]: I0123 11:58:51.926263 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sgsqx"] Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.042734 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phdk9\" (UniqueName: \"kubernetes.io/projected/a2830362-05e6-4a49-887e-cf3d25cf65a4-kube-api-access-phdk9\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.042836 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2830362-05e6-4a49-887e-cf3d25cf65a4-registry-certificates\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.042887 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2830362-05e6-4a49-887e-cf3d25cf65a4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.042941 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.043046 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2830362-05e6-4a49-887e-cf3d25cf65a4-trusted-ca\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.043133 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2830362-05e6-4a49-887e-cf3d25cf65a4-bound-sa-token\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.043198 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2830362-05e6-4a49-887e-cf3d25cf65a4-registry-tls\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.043350 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2830362-05e6-4a49-887e-cf3d25cf65a4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.065906 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.146795 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2830362-05e6-4a49-887e-cf3d25cf65a4-registry-certificates\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.146961 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2830362-05e6-4a49-887e-cf3d25cf65a4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.147049 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2830362-05e6-4a49-887e-cf3d25cf65a4-trusted-ca\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.147094 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2830362-05e6-4a49-887e-cf3d25cf65a4-bound-sa-token\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.147141 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2830362-05e6-4a49-887e-cf3d25cf65a4-registry-tls\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.147215 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2830362-05e6-4a49-887e-cf3d25cf65a4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.147276 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-phdk9\" (UniqueName: \"kubernetes.io/projected/a2830362-05e6-4a49-887e-cf3d25cf65a4-kube-api-access-phdk9\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.148042 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a2830362-05e6-4a49-887e-cf3d25cf65a4-registry-certificates\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.148381 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2830362-05e6-4a49-887e-cf3d25cf65a4-trusted-ca\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.148527 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a2830362-05e6-4a49-887e-cf3d25cf65a4-ca-trust-extracted\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.158257 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a2830362-05e6-4a49-887e-cf3d25cf65a4-registry-tls\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.163049 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a2830362-05e6-4a49-887e-cf3d25cf65a4-installation-pull-secrets\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.167166 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phdk9\" (UniqueName: \"kubernetes.io/projected/a2830362-05e6-4a49-887e-cf3d25cf65a4-kube-api-access-phdk9\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.172059 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a2830362-05e6-4a49-887e-cf3d25cf65a4-bound-sa-token\") pod \"image-registry-66df7c8f76-sgsqx\" (UID: \"a2830362-05e6-4a49-887e-cf3d25cf65a4\") " pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.228562 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.674610 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-sgsqx"] Jan 23 11:58:52 crc kubenswrapper[4865]: W0123 11:58:52.687775 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2830362_05e6_4a49_887e_cf3d25cf65a4.slice/crio-134c5ab87af2e432bcfc12128e33cf3ad3ff3b4a43722777e4faeee06088b86a WatchSource:0}: Error finding container 134c5ab87af2e432bcfc12128e33cf3ad3ff3b4a43722777e4faeee06088b86a: Status 404 returned error can't find the container with id 134c5ab87af2e432bcfc12128e33cf3ad3ff3b4a43722777e4faeee06088b86a Jan 23 11:58:52 crc kubenswrapper[4865]: I0123 11:58:52.727240 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" event={"ID":"a2830362-05e6-4a49-887e-cf3d25cf65a4","Type":"ContainerStarted","Data":"134c5ab87af2e432bcfc12128e33cf3ad3ff3b4a43722777e4faeee06088b86a"} Jan 23 11:58:53 crc kubenswrapper[4865]: I0123 11:58:53.735390 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" event={"ID":"a2830362-05e6-4a49-887e-cf3d25cf65a4","Type":"ContainerStarted","Data":"a82cb642866d620e7e8da4c34411e1a4054fd3eb6ccb5d984ad3c250d3945b97"} Jan 23 11:58:53 crc kubenswrapper[4865]: I0123 11:58:53.735801 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:58:53 crc kubenswrapper[4865]: I0123 11:58:53.762383 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" podStartSLOduration=2.7623575369999998 podStartE2EDuration="2.762357537s" podCreationTimestamp="2026-01-23 11:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 11:58:53.761520887 +0000 UTC m=+377.930593143" watchObservedRunningTime="2026-01-23 11:58:53.762357537 +0000 UTC m=+377.931429773" Jan 23 11:59:12 crc kubenswrapper[4865]: I0123 11:59:12.233117 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 11:59:12 crc kubenswrapper[4865]: I0123 11:59:12.317576 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6ph28"] Jan 23 11:59:18 crc kubenswrapper[4865]: I0123 11:59:18.776487 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 11:59:18 crc kubenswrapper[4865]: I0123 11:59:18.777002 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.362145 4865 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" podUID="d18c2296-0938-4fef-8c63-8bd9f25c8fc3" containerName="registry" containerID="cri-o://b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d" gracePeriod=30 Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.781780 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.865728 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-registry-tls\") pod \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.865774 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-trusted-ca\") pod \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.865800 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-ca-trust-extracted\") pod \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.865832 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-registry-certificates\") pod \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.865984 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.866012 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-bound-sa-token\") pod \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.866045 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-installation-pull-secrets\") pod \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.866119 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nkt6\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-kube-api-access-4nkt6\") pod \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\" (UID: \"d18c2296-0938-4fef-8c63-8bd9f25c8fc3\") " Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.866630 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-trusted-ca" 
(OuterVolumeSpecName: "trusted-ca") pod "d18c2296-0938-4fef-8c63-8bd9f25c8fc3" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.866846 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d18c2296-0938-4fef-8c63-8bd9f25c8fc3" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.873064 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d18c2296-0938-4fef-8c63-8bd9f25c8fc3" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.873224 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d18c2296-0938-4fef-8c63-8bd9f25c8fc3" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.873364 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-kube-api-access-4nkt6" (OuterVolumeSpecName: "kube-api-access-4nkt6") pod "d18c2296-0938-4fef-8c63-8bd9f25c8fc3" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3"). InnerVolumeSpecName "kube-api-access-4nkt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.873443 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d18c2296-0938-4fef-8c63-8bd9f25c8fc3" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.878496 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d18c2296-0938-4fef-8c63-8bd9f25c8fc3" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.888361 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d18c2296-0938-4fef-8c63-8bd9f25c8fc3" (UID: "d18c2296-0938-4fef-8c63-8bd9f25c8fc3"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.967496 4865 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.967548 4865 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.967565 4865 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.967580 4865 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.967592 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nkt6\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-kube-api-access-4nkt6\") on node \"crc\" DevicePath \"\"" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.967633 4865 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 11:59:37 crc kubenswrapper[4865]: I0123 11:59:37.967646 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d18c2296-0938-4fef-8c63-8bd9f25c8fc3-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 11:59:38 crc kubenswrapper[4865]: I0123 11:59:38.016870 4865 generic.go:334] "Generic (PLEG): container finished" podID="d18c2296-0938-4fef-8c63-8bd9f25c8fc3" containerID="b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d" exitCode=0 Jan 23 11:59:38 crc kubenswrapper[4865]: I0123 11:59:38.016917 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" event={"ID":"d18c2296-0938-4fef-8c63-8bd9f25c8fc3","Type":"ContainerDied","Data":"b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d"} Jan 23 11:59:38 crc kubenswrapper[4865]: I0123 11:59:38.017251 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" event={"ID":"d18c2296-0938-4fef-8c63-8bd9f25c8fc3","Type":"ContainerDied","Data":"18207ffb42a34f40a01f21906b84f5d4e159640642936e8e906404af67d0b464"} Jan 23 11:59:38 crc kubenswrapper[4865]: I0123 11:59:38.017278 4865 scope.go:117] "RemoveContainer" containerID="b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d" Jan 23 11:59:38 crc kubenswrapper[4865]: I0123 11:59:38.017209 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-6ph28" Jan 23 11:59:38 crc kubenswrapper[4865]: I0123 11:59:38.042104 4865 scope.go:117] "RemoveContainer" containerID="b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d" Jan 23 11:59:38 crc kubenswrapper[4865]: E0123 11:59:38.042736 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d\": container with ID starting with b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d not found: ID does not exist" containerID="b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d" Jan 23 11:59:38 crc kubenswrapper[4865]: I0123 11:59:38.042781 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d"} err="failed to get container status \"b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d\": rpc error: code = NotFound desc = could not find container \"b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d\": container with ID starting with b824265b1526636bd2b3fb9e9fb1b95b6f8c3f8b1e83990f2b172f3f50ce777d not found: ID does not exist" Jan 23 11:59:38 crc kubenswrapper[4865]: I0123 11:59:38.063914 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6ph28"] Jan 23 11:59:38 crc kubenswrapper[4865]: I0123 11:59:38.069278 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-6ph28"] Jan 23 11:59:38 crc kubenswrapper[4865]: I0123 11:59:38.124362 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d18c2296-0938-4fef-8c63-8bd9f25c8fc3" path="/var/lib/kubelet/pods/d18c2296-0938-4fef-8c63-8bd9f25c8fc3/volumes" Jan 23 11:59:48 crc kubenswrapper[4865]: I0123 11:59:48.776890 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 11:59:48 crc kubenswrapper[4865]: I0123 11:59:48.777510 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 11:59:48 crc kubenswrapper[4865]: I0123 11:59:48.777584 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 11:59:48 crc kubenswrapper[4865]: I0123 11:59:48.778680 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6b1b0eba2941eeb0825d0b03a6164c492659197949b6b6163a76c28e2d0b61a"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 11:59:48 crc kubenswrapper[4865]: I0123 11:59:48.778807 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" 
podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://a6b1b0eba2941eeb0825d0b03a6164c492659197949b6b6163a76c28e2d0b61a" gracePeriod=600 Jan 23 11:59:49 crc kubenswrapper[4865]: I0123 11:59:49.090587 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="a6b1b0eba2941eeb0825d0b03a6164c492659197949b6b6163a76c28e2d0b61a" exitCode=0 Jan 23 11:59:49 crc kubenswrapper[4865]: I0123 11:59:49.090679 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"a6b1b0eba2941eeb0825d0b03a6164c492659197949b6b6163a76c28e2d0b61a"} Jan 23 11:59:49 crc kubenswrapper[4865]: I0123 11:59:49.091511 4865 scope.go:117] "RemoveContainer" containerID="4b75a664eae1a7f45959f080d55ccf5b66649885de86573064fc8b9a69cadec9" Jan 23 11:59:50 crc kubenswrapper[4865]: I0123 11:59:50.099482 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"ff98bb889080e3c7f19be17161b36fc32daa7506d4ebfb5788d7c8ff79bcc3ed"} Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.155919 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz"] Jan 23 12:00:00 crc kubenswrapper[4865]: E0123 12:00:00.156647 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d18c2296-0938-4fef-8c63-8bd9f25c8fc3" containerName="registry" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.156662 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d18c2296-0938-4fef-8c63-8bd9f25c8fc3" containerName="registry" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.156771 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d18c2296-0938-4fef-8c63-8bd9f25c8fc3" containerName="registry" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.157240 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.160164 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.160344 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.168812 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz"] Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.292330 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-secret-volume\") pod \"collect-profiles-29486160-wzvsz\" (UID: \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.292376 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d89mh\" (UniqueName: \"kubernetes.io/projected/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-kube-api-access-d89mh\") pod \"collect-profiles-29486160-wzvsz\" (UID: \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.292456 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-config-volume\") pod \"collect-profiles-29486160-wzvsz\" (UID: \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.394259 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-config-volume\") pod \"collect-profiles-29486160-wzvsz\" (UID: \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.394348 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-secret-volume\") pod \"collect-profiles-29486160-wzvsz\" (UID: \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.394377 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d89mh\" (UniqueName: \"kubernetes.io/projected/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-kube-api-access-d89mh\") pod \"collect-profiles-29486160-wzvsz\" (UID: \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.395646 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-config-volume\") pod 
\"collect-profiles-29486160-wzvsz\" (UID: \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.401196 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-secret-volume\") pod \"collect-profiles-29486160-wzvsz\" (UID: \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.415727 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d89mh\" (UniqueName: \"kubernetes.io/projected/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-kube-api-access-d89mh\") pod \"collect-profiles-29486160-wzvsz\" (UID: \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.476650 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:00 crc kubenswrapper[4865]: I0123 12:00:00.636825 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz"] Jan 23 12:00:01 crc kubenswrapper[4865]: I0123 12:00:01.165237 4865 generic.go:334] "Generic (PLEG): container finished" podID="db41c5d0-dcfc-47eb-a67b-1d4875fafcfd" containerID="54279a3cfbb7024e5c9217d0603a39e6672d78f5bade43871dd3423a2e98c57c" exitCode=0 Jan 23 12:00:01 crc kubenswrapper[4865]: I0123 12:00:01.165302 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" event={"ID":"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd","Type":"ContainerDied","Data":"54279a3cfbb7024e5c9217d0603a39e6672d78f5bade43871dd3423a2e98c57c"} Jan 23 12:00:01 crc kubenswrapper[4865]: I0123 12:00:01.165326 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" event={"ID":"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd","Type":"ContainerStarted","Data":"9a6747b6136a9947eef5435657209de9f5ec43b952fa0ab6f334633b2a3b3304"} Jan 23 12:00:02 crc kubenswrapper[4865]: I0123 12:00:02.393929 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:02 crc kubenswrapper[4865]: I0123 12:00:02.521731 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-config-volume\") pod \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\" (UID: \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\") " Jan 23 12:00:02 crc kubenswrapper[4865]: I0123 12:00:02.521839 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-secret-volume\") pod \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\" (UID: \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\") " Jan 23 12:00:02 crc kubenswrapper[4865]: I0123 12:00:02.521877 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d89mh\" (UniqueName: \"kubernetes.io/projected/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-kube-api-access-d89mh\") pod \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\" (UID: \"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd\") " Jan 23 12:00:02 crc kubenswrapper[4865]: I0123 12:00:02.523166 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-config-volume" (OuterVolumeSpecName: "config-volume") pod "db41c5d0-dcfc-47eb-a67b-1d4875fafcfd" (UID: "db41c5d0-dcfc-47eb-a67b-1d4875fafcfd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:00:02 crc kubenswrapper[4865]: I0123 12:00:02.527773 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-kube-api-access-d89mh" (OuterVolumeSpecName: "kube-api-access-d89mh") pod "db41c5d0-dcfc-47eb-a67b-1d4875fafcfd" (UID: "db41c5d0-dcfc-47eb-a67b-1d4875fafcfd"). InnerVolumeSpecName "kube-api-access-d89mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:00:02 crc kubenswrapper[4865]: I0123 12:00:02.528824 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "db41c5d0-dcfc-47eb-a67b-1d4875fafcfd" (UID: "db41c5d0-dcfc-47eb-a67b-1d4875fafcfd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:00:02 crc kubenswrapper[4865]: I0123 12:00:02.623009 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 12:00:02 crc kubenswrapper[4865]: I0123 12:00:02.623044 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 12:00:02 crc kubenswrapper[4865]: I0123 12:00:02.623054 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d89mh\" (UniqueName: \"kubernetes.io/projected/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd-kube-api-access-d89mh\") on node \"crc\" DevicePath \"\"" Jan 23 12:00:03 crc kubenswrapper[4865]: I0123 12:00:03.181861 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" event={"ID":"db41c5d0-dcfc-47eb-a67b-1d4875fafcfd","Type":"ContainerDied","Data":"9a6747b6136a9947eef5435657209de9f5ec43b952fa0ab6f334633b2a3b3304"} Jan 23 12:00:03 crc kubenswrapper[4865]: I0123 12:00:03.181924 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz" Jan 23 12:00:03 crc kubenswrapper[4865]: I0123 12:00:03.181932 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a6747b6136a9947eef5435657209de9f5ec43b952fa0ab6f334633b2a3b3304" Jan 23 12:01:52 crc kubenswrapper[4865]: I0123 12:01:52.400082 4865 scope.go:117] "RemoveContainer" containerID="02a885ad1a6563b0a81d1a9175c854c67f4aecd2006602a57a757c392aff28be" Jan 23 12:02:18 crc kubenswrapper[4865]: I0123 12:02:18.776642 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:02:18 crc kubenswrapper[4865]: I0123 12:02:18.777110 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:02:35 crc kubenswrapper[4865]: I0123 12:02:35.976332 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt"] Jan 23 12:02:35 crc kubenswrapper[4865]: E0123 12:02:35.977668 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db41c5d0-dcfc-47eb-a67b-1d4875fafcfd" containerName="collect-profiles" Jan 23 12:02:35 crc kubenswrapper[4865]: I0123 12:02:35.977690 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="db41c5d0-dcfc-47eb-a67b-1d4875fafcfd" containerName="collect-profiles" Jan 23 12:02:35 crc kubenswrapper[4865]: I0123 12:02:35.977878 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="db41c5d0-dcfc-47eb-a67b-1d4875fafcfd" containerName="collect-profiles" Jan 23 12:02:35 crc kubenswrapper[4865]: I0123 12:02:35.978493 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" Jan 23 12:02:35 crc kubenswrapper[4865]: I0123 12:02:35.983564 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-mbdcq"] Jan 23 12:02:35 crc kubenswrapper[4865]: I0123 12:02:35.984575 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbdcq" Jan 23 12:02:35 crc kubenswrapper[4865]: I0123 12:02:35.985741 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8ft6r" Jan 23 12:02:35 crc kubenswrapper[4865]: I0123 12:02:35.985784 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 12:02:35 crc kubenswrapper[4865]: I0123 12:02:35.986041 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 12:02:35 crc kubenswrapper[4865]: I0123 12:02:35.986450 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-tlds9" Jan 23 12:02:35 crc kubenswrapper[4865]: I0123 12:02:35.997082 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt"] Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.000502 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mbdcq"] Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.063760 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-x972r"] Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.064947 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.070083 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6x58s" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.087052 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-x972r"] Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.128415 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stmwh\" (UniqueName: \"kubernetes.io/projected/a332d40d-1d78-4d9d-b768-b988654c732a-kube-api-access-stmwh\") pod \"cert-manager-858654f9db-mbdcq\" (UID: \"a332d40d-1d78-4d9d-b768-b988654c732a\") " pod="cert-manager/cert-manager-858654f9db-mbdcq" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.128480 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj8xx\" (UniqueName: \"kubernetes.io/projected/15434cef-8cb6-4386-b761-143f1819cac8-kube-api-access-kj8xx\") pod \"cert-manager-cainjector-cf98fcc89-7kqtt\" (UID: \"15434cef-8cb6-4386-b761-143f1819cac8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.128741 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt5n2\" (UniqueName: \"kubernetes.io/projected/1405b73d-070d-495e-a80d-46fc2505ff8c-kube-api-access-wt5n2\") pod \"cert-manager-webhook-687f57d79b-x972r\" (UID: \"1405b73d-070d-495e-a80d-46fc2505ff8c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 12:02:36 crc 
kubenswrapper[4865]: I0123 12:02:36.229610 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj8xx\" (UniqueName: \"kubernetes.io/projected/15434cef-8cb6-4386-b761-143f1819cac8-kube-api-access-kj8xx\") pod \"cert-manager-cainjector-cf98fcc89-7kqtt\" (UID: \"15434cef-8cb6-4386-b761-143f1819cac8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.229707 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt5n2\" (UniqueName: \"kubernetes.io/projected/1405b73d-070d-495e-a80d-46fc2505ff8c-kube-api-access-wt5n2\") pod \"cert-manager-webhook-687f57d79b-x972r\" (UID: \"1405b73d-070d-495e-a80d-46fc2505ff8c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.229765 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stmwh\" (UniqueName: \"kubernetes.io/projected/a332d40d-1d78-4d9d-b768-b988654c732a-kube-api-access-stmwh\") pod \"cert-manager-858654f9db-mbdcq\" (UID: \"a332d40d-1d78-4d9d-b768-b988654c732a\") " pod="cert-manager/cert-manager-858654f9db-mbdcq" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.248171 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.257927 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.277081 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj8xx\" (UniqueName: \"kubernetes.io/projected/15434cef-8cb6-4386-b761-143f1819cac8-kube-api-access-kj8xx\") pod \"cert-manager-cainjector-cf98fcc89-7kqtt\" (UID: \"15434cef-8cb6-4386-b761-143f1819cac8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.277208 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt5n2\" (UniqueName: \"kubernetes.io/projected/1405b73d-070d-495e-a80d-46fc2505ff8c-kube-api-access-wt5n2\") pod \"cert-manager-webhook-687f57d79b-x972r\" (UID: \"1405b73d-070d-495e-a80d-46fc2505ff8c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.279723 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stmwh\" (UniqueName: \"kubernetes.io/projected/a332d40d-1d78-4d9d-b768-b988654c732a-kube-api-access-stmwh\") pod \"cert-manager-858654f9db-mbdcq\" (UID: \"a332d40d-1d78-4d9d-b768-b988654c732a\") " pod="cert-manager/cert-manager-858654f9db-mbdcq" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.309238 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8ft6r" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.317027 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.321174 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-tlds9" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.329680 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-mbdcq" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.380593 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.637641 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-mbdcq"] Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.648173 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.687499 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt"] Jan 23 12:02:36 crc kubenswrapper[4865]: W0123 12:02:36.689141 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15434cef_8cb6_4386_b761_143f1819cac8.slice/crio-c646879040d72d9f0b2ad16494db1fa78b7c8f81015cb245cb5634b52cc1207c WatchSource:0}: Error finding container c646879040d72d9f0b2ad16494db1fa78b7c8f81015cb245cb5634b52cc1207c: Status 404 returned error can't find the container with id c646879040d72d9f0b2ad16494db1fa78b7c8f81015cb245cb5634b52cc1207c Jan 23 12:02:36 crc kubenswrapper[4865]: I0123 12:02:36.732023 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-x972r"] Jan 23 12:02:36 crc kubenswrapper[4865]: W0123 12:02:36.734750 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1405b73d_070d_495e_a80d_46fc2505ff8c.slice/crio-9dbc14c3caa84aaabb972ae00b55292a761e044715de738f5eddcffcaac6bbb0 WatchSource:0}: Error finding container 9dbc14c3caa84aaabb972ae00b55292a761e044715de738f5eddcffcaac6bbb0: Status 404 returned error can't find the container with id 9dbc14c3caa84aaabb972ae00b55292a761e044715de738f5eddcffcaac6bbb0 Jan 23 12:02:37 crc kubenswrapper[4865]: I0123 12:02:37.194227 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" event={"ID":"15434cef-8cb6-4386-b761-143f1819cac8","Type":"ContainerStarted","Data":"c646879040d72d9f0b2ad16494db1fa78b7c8f81015cb245cb5634b52cc1207c"} Jan 23 12:02:37 crc kubenswrapper[4865]: I0123 12:02:37.196584 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mbdcq" event={"ID":"a332d40d-1d78-4d9d-b768-b988654c732a","Type":"ContainerStarted","Data":"f703cd813775b5ddfb6a0f66a3d8594789549cb84d88727fcaa38749465cfd19"} Jan 23 12:02:37 crc kubenswrapper[4865]: I0123 12:02:37.206346 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" event={"ID":"1405b73d-070d-495e-a80d-46fc2505ff8c","Type":"ContainerStarted","Data":"9dbc14c3caa84aaabb972ae00b55292a761e044715de738f5eddcffcaac6bbb0"} Jan 23 12:02:42 crc kubenswrapper[4865]: I0123 12:02:42.240513 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" event={"ID":"1405b73d-070d-495e-a80d-46fc2505ff8c","Type":"ContainerStarted","Data":"32adfc53532b9a4da0fc696be93013a0d5ed9468ca28f5ee3ea470e50ce0b017"} Jan 23 12:02:42 crc kubenswrapper[4865]: I0123 12:02:42.241391 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 12:02:42 crc 
kubenswrapper[4865]: I0123 12:02:42.243196 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" event={"ID":"15434cef-8cb6-4386-b761-143f1819cac8","Type":"ContainerStarted","Data":"2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4"} Jan 23 12:02:42 crc kubenswrapper[4865]: I0123 12:02:42.245668 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mbdcq" event={"ID":"a332d40d-1d78-4d9d-b768-b988654c732a","Type":"ContainerStarted","Data":"8cf4698cdb0957f903144e968b184805d998fc4db6eb44b4ecb51ac27de605f1"} Jan 23 12:02:42 crc kubenswrapper[4865]: I0123 12:02:42.261374 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podStartSLOduration=1.198115624 podStartE2EDuration="6.261350577s" podCreationTimestamp="2026-01-23 12:02:36 +0000 UTC" firstStartedPulling="2026-01-23 12:02:36.738030985 +0000 UTC m=+600.907103211" lastFinishedPulling="2026-01-23 12:02:41.801265938 +0000 UTC m=+605.970338164" observedRunningTime="2026-01-23 12:02:42.259590914 +0000 UTC m=+606.428663140" watchObservedRunningTime="2026-01-23 12:02:42.261350577 +0000 UTC m=+606.430422793" Jan 23 12:02:42 crc kubenswrapper[4865]: I0123 12:02:42.276489 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" podStartSLOduration=2.179812757 podStartE2EDuration="7.276462248s" podCreationTimestamp="2026-01-23 12:02:35 +0000 UTC" firstStartedPulling="2026-01-23 12:02:36.697073021 +0000 UTC m=+600.866145247" lastFinishedPulling="2026-01-23 12:02:41.793722502 +0000 UTC m=+605.962794738" observedRunningTime="2026-01-23 12:02:42.275186346 +0000 UTC m=+606.444258572" watchObservedRunningTime="2026-01-23 12:02:42.276462248 +0000 UTC m=+606.445534474" Jan 23 12:02:42 crc kubenswrapper[4865]: I0123 12:02:42.306758 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-mbdcq" podStartSLOduration=2.1938331189999998 podStartE2EDuration="7.30673597s" podCreationTimestamp="2026-01-23 12:02:35 +0000 UTC" firstStartedPulling="2026-01-23 12:02:36.647945176 +0000 UTC m=+600.817017402" lastFinishedPulling="2026-01-23 12:02:41.760848027 +0000 UTC m=+605.929920253" observedRunningTime="2026-01-23 12:02:42.305337216 +0000 UTC m=+606.474409442" watchObservedRunningTime="2026-01-23 12:02:42.30673597 +0000 UTC m=+606.475808196" Jan 23 12:02:45 crc kubenswrapper[4865]: I0123 12:02:45.425783 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68shs"] Jan 23 12:02:45 crc kubenswrapper[4865]: I0123 12:02:45.426846 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovn-controller" containerID="cri-o://ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570" gracePeriod=30 Jan 23 12:02:45 crc kubenswrapper[4865]: I0123 12:02:45.426950 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="northd" containerID="cri-o://4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d" gracePeriod=30 Jan 23 12:02:45 crc kubenswrapper[4865]: I0123 12:02:45.426978 4865 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728" gracePeriod=30 Jan 23 12:02:45 crc kubenswrapper[4865]: I0123 12:02:45.427034 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="kube-rbac-proxy-node" containerID="cri-o://18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6" gracePeriod=30 Jan 23 12:02:45 crc kubenswrapper[4865]: I0123 12:02:45.427109 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovn-acl-logging" containerID="cri-o://45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a" gracePeriod=30 Jan 23 12:02:45 crc kubenswrapper[4865]: I0123 12:02:45.427342 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="sbdb" containerID="cri-o://3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c" gracePeriod=30 Jan 23 12:02:45 crc kubenswrapper[4865]: I0123 12:02:45.427516 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="nbdb" containerID="cri-o://982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51" gracePeriod=30 Jan 23 12:02:45 crc kubenswrapper[4865]: I0123 12:02:45.485255 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" containerID="cri-o://7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd" gracePeriod=30 Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.150230 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/3.log" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.154629 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovn-acl-logging/0.log" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.155227 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovn-controller/0.log" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.155936 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.224937 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hvjnd"] Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225233 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="nbdb" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225250 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="nbdb" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225264 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="kube-rbac-proxy-node" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225276 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="kube-rbac-proxy-node" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225292 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225302 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225311 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225319 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225327 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovn-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225336 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovn-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225346 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovn-acl-logging" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225354 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovn-acl-logging" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225362 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225370 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225396 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225405 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225425 4865 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="northd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225435 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="northd" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225445 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="sbdb" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225454 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="sbdb" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225465 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="kubecfg-setup" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225473 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="kubecfg-setup" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225616 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225627 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225635 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225648 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225659 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="sbdb" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225669 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225681 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovn-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225689 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="northd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225699 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225708 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovn-acl-logging" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225718 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="kube-rbac-proxy-node" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225729 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="nbdb" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225933 4865 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225972 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.225983 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.225991 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerName="ovnkube-controller" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.228199 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.279262 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovnkube-controller/3.log" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.282696 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovn-acl-logging/0.log" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.283672 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-68shs_4ea3549b-3898-4d82-8240-2e062b4a6046/ovn-controller/0.log" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284641 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd" exitCode=0 Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284678 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c" exitCode=0 Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284689 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51" exitCode=0 Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284701 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d" exitCode=0 Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284713 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728" exitCode=0 Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284721 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6" exitCode=0 Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284729 4865 generic.go:334] "Generic (PLEG): container finished" podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a" exitCode=143 Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284738 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="4ea3549b-3898-4d82-8240-2e062b4a6046" containerID="ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570" exitCode=143 Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284812 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284848 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284862 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284870 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284887 4865 scope.go:117] "RemoveContainer" containerID="7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.284872 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285065 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285079 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285099 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285112 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285119 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285125 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285133 4865 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285139 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285147 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285157 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285164 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285174 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285185 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285193 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285201 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285210 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285217 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285224 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285233 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285238 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285244 4865 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285249 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285257 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285266 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285273 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285279 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285284 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285291 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285298 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285303 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285308 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285315 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285322 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285329 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-68shs" event={"ID":"4ea3549b-3898-4d82-8240-2e062b4a6046","Type":"ContainerDied","Data":"876b7cec3fb24d1cf86c2c30e77fe6e0213e9433560b3b5f0239229f73236091"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285338 4865 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285344 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285351 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285358 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285364 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285370 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285376 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285381 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285387 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.285392 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.287913 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cb8rs_b3d06336-44ac-4c17-899b-28cbfe2ee64d/kube-multus/2.log" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.288911 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cb8rs_b3d06336-44ac-4c17-899b-28cbfe2ee64d/kube-multus/1.log" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.288988 4865 generic.go:334] "Generic (PLEG): container finished" podID="b3d06336-44ac-4c17-899b-28cbfe2ee64d" containerID="00a6f6797638587efdf93cfa4a2c2f18b2ee85067681c5118db013881e63b8a4" exitCode=2 Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.289041 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cb8rs" event={"ID":"b3d06336-44ac-4c17-899b-28cbfe2ee64d","Type":"ContainerDied","Data":"00a6f6797638587efdf93cfa4a2c2f18b2ee85067681c5118db013881e63b8a4"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.289086 4865 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06"} Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.289648 4865 scope.go:117] "RemoveContainer" containerID="00a6f6797638587efdf93cfa4a2c2f18b2ee85067681c5118db013881e63b8a4" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.289924 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-cb8rs_openshift-multus(b3d06336-44ac-4c17-899b-28cbfe2ee64d)\"" pod="openshift-multus/multus-cb8rs" podUID="b3d06336-44ac-4c17-899b-28cbfe2ee64d" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.304878 4865 scope.go:117] "RemoveContainer" containerID="ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.308513 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-ovnkube-config\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.308999 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-var-lib-openvswitch\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309018 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309048 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-etc-openvswitch\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309072 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309118 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-cni-bin\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309153 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309188 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-kubelet\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309190 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309227 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl88l\" (UniqueName: \"kubernetes.io/projected/4ea3549b-3898-4d82-8240-2e062b4a6046-kube-api-access-wl88l\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309220 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309258 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-systemd\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309286 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-systemd-units\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309314 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-node-log\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309347 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-run-ovn-kubernetes\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309371 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ea3549b-3898-4d82-8240-2e062b4a6046-ovn-node-metrics-cert\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309402 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-env-overrides\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309483 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-cni-netd\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309507 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-log-socket\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309530 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-run-netns\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309558 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-slash\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: 
\"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309592 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-ovn\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309628 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-ovnkube-script-lib\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309650 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-openvswitch\") pod \"4ea3549b-3898-4d82-8240-2e062b4a6046\" (UID: \"4ea3549b-3898-4d82-8240-2e062b4a6046\") " Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309881 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-run-netns\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309916 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-run-ovn\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309954 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4afdf9c5-20b1-4482-a599-36000ac58add-env-overrides\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309995 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4afdf9c5-20b1-4482-a599-36000ac58add-ovn-node-metrics-cert\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310026 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-kubelet\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310042 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn9wl\" (UniqueName: \"kubernetes.io/projected/4afdf9c5-20b1-4482-a599-36000ac58add-kube-api-access-vn9wl\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc 
kubenswrapper[4865]: I0123 12:02:46.310064 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-var-lib-openvswitch\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310089 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-slash\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310106 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4afdf9c5-20b1-4482-a599-36000ac58add-ovnkube-config\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310133 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-run-openvswitch\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310149 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-run-systemd\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310166 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-cni-netd\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310186 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-etc-openvswitch\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310202 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-node-log\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310243 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-run-ovn-kubernetes\") pod \"ovnkube-node-hvjnd\" (UID: 
\"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310262 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310284 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-systemd-units\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310306 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-log-socket\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310331 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4afdf9c5-20b1-4482-a599-36000ac58add-ovnkube-script-lib\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310360 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-cni-bin\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310404 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310418 4865 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310431 4865 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310442 4865 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309259 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod 
"4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.309277 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.310981 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.311010 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-node-log" (OuterVolumeSpecName: "node-log") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.311034 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.311178 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.311431 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.311503 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.311537 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-log-socket" (OuterVolumeSpecName: "log-socket") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.311756 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.311821 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.314912 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-slash" (OuterVolumeSpecName: "host-slash") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.316390 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.318827 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea3549b-3898-4d82-8240-2e062b4a6046-kube-api-access-wl88l" (OuterVolumeSpecName: "kube-api-access-wl88l") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "kube-api-access-wl88l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.327011 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea3549b-3898-4d82-8240-2e062b4a6046-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.332637 4865 scope.go:117] "RemoveContainer" containerID="3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.332645 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4ea3549b-3898-4d82-8240-2e062b4a6046" (UID: "4ea3549b-3898-4d82-8240-2e062b4a6046"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.348293 4865 scope.go:117] "RemoveContainer" containerID="982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.365744 4865 scope.go:117] "RemoveContainer" containerID="4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.380560 4865 scope.go:117] "RemoveContainer" containerID="7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.386012 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.395195 4865 scope.go:117] "RemoveContainer" containerID="18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411272 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-etc-openvswitch\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411314 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-node-log\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411340 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-run-ovn-kubernetes\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411358 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411375 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-systemd-units\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" 
Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411395 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-log-socket\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411414 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4afdf9c5-20b1-4482-a599-36000ac58add-ovnkube-script-lib\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411429 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-cni-bin\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411447 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-run-netns\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411466 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-run-ovn\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411488 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4afdf9c5-20b1-4482-a599-36000ac58add-env-overrides\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411510 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4afdf9c5-20b1-4482-a599-36000ac58add-ovn-node-metrics-cert\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411529 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-kubelet\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411545 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn9wl\" (UniqueName: \"kubernetes.io/projected/4afdf9c5-20b1-4482-a599-36000ac58add-kube-api-access-vn9wl\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411572 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-var-lib-openvswitch\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411591 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-slash\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411622 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4afdf9c5-20b1-4482-a599-36000ac58add-ovnkube-config\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411639 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-cni-bin\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411645 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-run-openvswitch\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411666 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-run-openvswitch\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411681 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-run-systemd\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411694 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-run-netns\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411701 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-cni-netd\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411718 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-run-ovn\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411746 4865 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411757 4865 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411767 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl88l\" (UniqueName: \"kubernetes.io/projected/4ea3549b-3898-4d82-8240-2e062b4a6046-kube-api-access-wl88l\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411776 4865 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411784 4865 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411816 4865 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-node-log\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411826 4865 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411834 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4ea3549b-3898-4d82-8240-2e062b4a6046-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411842 4865 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411854 4865 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411862 4865 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-log-socket\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411872 4865 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411880 4865 
reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-host-slash\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411888 4865 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411906 4865 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4ea3549b-3898-4d82-8240-2e062b4a6046-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411916 4865 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4ea3549b-3898-4d82-8240-2e062b4a6046-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411939 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-etc-openvswitch\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411960 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-node-log\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.411986 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-run-ovn-kubernetes\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.412006 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.412042 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-systemd-units\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.412064 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-log-socket\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.412275 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4afdf9c5-20b1-4482-a599-36000ac58add-env-overrides\") pod 
\"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.412711 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4afdf9c5-20b1-4482-a599-36000ac58add-ovnkube-script-lib\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.412745 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-run-systemd\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.412766 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-cni-netd\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.412789 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-kubelet\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.412812 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-var-lib-openvswitch\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.412883 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4afdf9c5-20b1-4482-a599-36000ac58add-host-slash\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.414045 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4afdf9c5-20b1-4482-a599-36000ac58add-ovnkube-config\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.417942 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4afdf9c5-20b1-4482-a599-36000ac58add-ovn-node-metrics-cert\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.430592 4865 scope.go:117] "RemoveContainer" containerID="45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.434496 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn9wl\" (UniqueName: 
\"kubernetes.io/projected/4afdf9c5-20b1-4482-a599-36000ac58add-kube-api-access-vn9wl\") pod \"ovnkube-node-hvjnd\" (UID: \"4afdf9c5-20b1-4482-a599-36000ac58add\") " pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.445774 4865 scope.go:117] "RemoveContainer" containerID="ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.462289 4865 scope.go:117] "RemoveContainer" containerID="9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.479400 4865 scope.go:117] "RemoveContainer" containerID="7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.479931 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd\": container with ID starting with 7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd not found: ID does not exist" containerID="7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.479980 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd"} err="failed to get container status \"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd\": rpc error: code = NotFound desc = could not find container \"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd\": container with ID starting with 7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.480028 4865 scope.go:117] "RemoveContainer" containerID="ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.480532 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\": container with ID starting with ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665 not found: ID does not exist" containerID="ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.480561 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665"} err="failed to get container status \"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\": rpc error: code = NotFound desc = could not find container \"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\": container with ID starting with ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.480580 4865 scope.go:117] "RemoveContainer" containerID="3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.481245 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\": container with ID starting with 
3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c not found: ID does not exist" containerID="3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.481273 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c"} err="failed to get container status \"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\": rpc error: code = NotFound desc = could not find container \"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\": container with ID starting with 3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.481309 4865 scope.go:117] "RemoveContainer" containerID="982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.481660 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\": container with ID starting with 982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51 not found: ID does not exist" containerID="982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.481692 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51"} err="failed to get container status \"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\": rpc error: code = NotFound desc = could not find container \"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\": container with ID starting with 982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.481711 4865 scope.go:117] "RemoveContainer" containerID="4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.482050 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\": container with ID starting with 4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d not found: ID does not exist" containerID="4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.482102 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d"} err="failed to get container status \"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\": rpc error: code = NotFound desc = could not find container \"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\": container with ID starting with 4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.482140 4865 scope.go:117] "RemoveContainer" containerID="7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.482459 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\": container with ID starting with 7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728 not found: ID does not exist" containerID="7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.482489 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728"} err="failed to get container status \"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\": rpc error: code = NotFound desc = could not find container \"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\": container with ID starting with 7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.482509 4865 scope.go:117] "RemoveContainer" containerID="18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.483005 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\": container with ID starting with 18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6 not found: ID does not exist" containerID="18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.483031 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6"} err="failed to get container status \"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\": rpc error: code = NotFound desc = could not find container \"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\": container with ID starting with 18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.483046 4865 scope.go:117] "RemoveContainer" containerID="45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.483637 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\": container with ID starting with 45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a not found: ID does not exist" containerID="45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.483791 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a"} err="failed to get container status \"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\": rpc error: code = NotFound desc = could not find container \"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\": container with ID starting with 45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.483813 4865 scope.go:117] "RemoveContainer" 
containerID="ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.484449 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\": container with ID starting with ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570 not found: ID does not exist" containerID="ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.484475 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570"} err="failed to get container status \"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\": rpc error: code = NotFound desc = could not find container \"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\": container with ID starting with ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.484489 4865 scope.go:117] "RemoveContainer" containerID="9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006" Jan 23 12:02:46 crc kubenswrapper[4865]: E0123 12:02:46.485059 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\": container with ID starting with 9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006 not found: ID does not exist" containerID="9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.485079 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006"} err="failed to get container status \"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\": rpc error: code = NotFound desc = could not find container \"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\": container with ID starting with 9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.485094 4865 scope.go:117] "RemoveContainer" containerID="7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.485367 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd"} err="failed to get container status \"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd\": rpc error: code = NotFound desc = could not find container \"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd\": container with ID starting with 7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.485483 4865 scope.go:117] "RemoveContainer" containerID="ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.485814 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665"} err="failed to get container status \"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\": rpc error: code = NotFound desc = could not find container \"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\": container with ID starting with ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.485833 4865 scope.go:117] "RemoveContainer" containerID="3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.486077 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c"} err="failed to get container status \"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\": rpc error: code = NotFound desc = could not find container \"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\": container with ID starting with 3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.486105 4865 scope.go:117] "RemoveContainer" containerID="982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.486424 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51"} err="failed to get container status \"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\": rpc error: code = NotFound desc = could not find container \"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\": container with ID starting with 982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.486445 4865 scope.go:117] "RemoveContainer" containerID="4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.486771 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d"} err="failed to get container status \"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\": rpc error: code = NotFound desc = could not find container \"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\": container with ID starting with 4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.486795 4865 scope.go:117] "RemoveContainer" containerID="7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.487015 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728"} err="failed to get container status \"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\": rpc error: code = NotFound desc = could not find container \"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\": container with ID starting with 7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728 not found: ID does not exist" Jan 
23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.487034 4865 scope.go:117] "RemoveContainer" containerID="18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.487259 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6"} err="failed to get container status \"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\": rpc error: code = NotFound desc = could not find container \"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\": container with ID starting with 18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.487286 4865 scope.go:117] "RemoveContainer" containerID="45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.487566 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a"} err="failed to get container status \"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\": rpc error: code = NotFound desc = could not find container \"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\": container with ID starting with 45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.487584 4865 scope.go:117] "RemoveContainer" containerID="ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.487861 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570"} err="failed to get container status \"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\": rpc error: code = NotFound desc = could not find container \"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\": container with ID starting with ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.487893 4865 scope.go:117] "RemoveContainer" containerID="9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.488181 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006"} err="failed to get container status \"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\": rpc error: code = NotFound desc = could not find container \"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\": container with ID starting with 9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.488201 4865 scope.go:117] "RemoveContainer" containerID="7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.488435 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd"} err="failed to get container status 
\"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd\": rpc error: code = NotFound desc = could not find container \"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd\": container with ID starting with 7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.488471 4865 scope.go:117] "RemoveContainer" containerID="ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.488730 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665"} err="failed to get container status \"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\": rpc error: code = NotFound desc = could not find container \"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\": container with ID starting with ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.488750 4865 scope.go:117] "RemoveContainer" containerID="3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.489023 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c"} err="failed to get container status \"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\": rpc error: code = NotFound desc = could not find container \"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\": container with ID starting with 3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.489056 4865 scope.go:117] "RemoveContainer" containerID="982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.489302 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51"} err="failed to get container status \"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\": rpc error: code = NotFound desc = could not find container \"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\": container with ID starting with 982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.489324 4865 scope.go:117] "RemoveContainer" containerID="4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.489585 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d"} err="failed to get container status \"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\": rpc error: code = NotFound desc = could not find container \"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\": container with ID starting with 4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.489619 4865 scope.go:117] "RemoveContainer" 
containerID="7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.492757 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728"} err="failed to get container status \"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\": rpc error: code = NotFound desc = could not find container \"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\": container with ID starting with 7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.492829 4865 scope.go:117] "RemoveContainer" containerID="18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.493256 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6"} err="failed to get container status \"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\": rpc error: code = NotFound desc = could not find container \"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\": container with ID starting with 18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.493283 4865 scope.go:117] "RemoveContainer" containerID="45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.493642 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a"} err="failed to get container status \"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\": rpc error: code = NotFound desc = could not find container \"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\": container with ID starting with 45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.493705 4865 scope.go:117] "RemoveContainer" containerID="ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.494120 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570"} err="failed to get container status \"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\": rpc error: code = NotFound desc = could not find container \"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\": container with ID starting with ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.494138 4865 scope.go:117] "RemoveContainer" containerID="9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.494440 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006"} err="failed to get container status \"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\": rpc error: code = NotFound desc = could not find 
container \"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\": container with ID starting with 9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.494472 4865 scope.go:117] "RemoveContainer" containerID="7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.494822 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd"} err="failed to get container status \"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd\": rpc error: code = NotFound desc = could not find container \"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd\": container with ID starting with 7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.494863 4865 scope.go:117] "RemoveContainer" containerID="ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.495136 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665"} err="failed to get container status \"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\": rpc error: code = NotFound desc = could not find container \"ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665\": container with ID starting with ddd408bb6eeafdabb6b35e6e35b03fc91b6823fc980c5dbe0d9872f579822665 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.495159 4865 scope.go:117] "RemoveContainer" containerID="3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.495399 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c"} err="failed to get container status \"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\": rpc error: code = NotFound desc = could not find container \"3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c\": container with ID starting with 3ea4fa9a056ef121b68f5fc154ae64099a85ba76f83665d72fe1c9600444343c not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.495413 4865 scope.go:117] "RemoveContainer" containerID="982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.495854 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51"} err="failed to get container status \"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\": rpc error: code = NotFound desc = could not find container \"982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51\": container with ID starting with 982844e86710692ba525c2a3e12217bfcefec41293cc7eaa8aa52d5f20600d51 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.495869 4865 scope.go:117] "RemoveContainer" containerID="4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.496158 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d"} err="failed to get container status \"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\": rpc error: code = NotFound desc = could not find container \"4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d\": container with ID starting with 4d8ed5ccf6000485e9b77632e3b9a1ead52f6b5b2fa220531d559fcdc337172d not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.496182 4865 scope.go:117] "RemoveContainer" containerID="7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.496687 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728"} err="failed to get container status \"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\": rpc error: code = NotFound desc = could not find container \"7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728\": container with ID starting with 7e07ded0c3d393341148a0fd376803f2fc98dfb351ca0de6acd510c16802c728 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.496707 4865 scope.go:117] "RemoveContainer" containerID="18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.497276 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6"} err="failed to get container status \"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\": rpc error: code = NotFound desc = could not find container \"18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6\": container with ID starting with 18ed39c2c7ad9b0a6a5a2e90b72722384d909331285dfcbc0c389eb6a52a4ac6 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.497298 4865 scope.go:117] "RemoveContainer" containerID="45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.497625 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a"} err="failed to get container status \"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\": rpc error: code = NotFound desc = could not find container \"45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a\": container with ID starting with 45e1c7be75b9b7272a02226bd07a5aa59c6cd905d28a751a2c714863e720749a not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.497663 4865 scope.go:117] "RemoveContainer" containerID="ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.506868 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570"} err="failed to get container status \"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\": rpc error: code = NotFound desc = could not find container \"ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570\": container with ID starting with 
ed32cf431dba16750deeb851fba527082fb5e7f2d9c86bec8d4f726b09063570 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.506930 4865 scope.go:117] "RemoveContainer" containerID="9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.508031 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006"} err="failed to get container status \"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\": rpc error: code = NotFound desc = could not find container \"9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006\": container with ID starting with 9eb16fd2d72ea149292d065650dae3b9b3a20c849782d3f1490c8076e3746006 not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.508086 4865 scope.go:117] "RemoveContainer" containerID="7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.508503 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd"} err="failed to get container status \"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd\": rpc error: code = NotFound desc = could not find container \"7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd\": container with ID starting with 7f42c5096291b75b4cf12aed40935adb4d5534416471dc1cdc28a9d2231b08bd not found: ID does not exist" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.542618 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.662189 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68shs"] Jan 23 12:02:46 crc kubenswrapper[4865]: I0123 12:02:46.666839 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-68shs"] Jan 23 12:02:47 crc kubenswrapper[4865]: I0123 12:02:47.302387 4865 generic.go:334] "Generic (PLEG): container finished" podID="4afdf9c5-20b1-4482-a599-36000ac58add" containerID="e6077dcdb5cbc0560de9a12a264f58c8448adb7d2066dc1f9d886b35b89621b9" exitCode=0 Jan 23 12:02:47 crc kubenswrapper[4865]: I0123 12:02:47.302463 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" event={"ID":"4afdf9c5-20b1-4482-a599-36000ac58add","Type":"ContainerDied","Data":"e6077dcdb5cbc0560de9a12a264f58c8448adb7d2066dc1f9d886b35b89621b9"} Jan 23 12:02:47 crc kubenswrapper[4865]: I0123 12:02:47.302704 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" event={"ID":"4afdf9c5-20b1-4482-a599-36000ac58add","Type":"ContainerStarted","Data":"a57a70b209beef491b83b14e63f56f35478205c5ba80c3298f3f957193c382ad"} Jan 23 12:02:48 crc kubenswrapper[4865]: I0123 12:02:48.124034 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea3549b-3898-4d82-8240-2e062b4a6046" path="/var/lib/kubelet/pods/4ea3549b-3898-4d82-8240-2e062b4a6046/volumes" Jan 23 12:02:48 crc kubenswrapper[4865]: I0123 12:02:48.311820 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" 
event={"ID":"4afdf9c5-20b1-4482-a599-36000ac58add","Type":"ContainerStarted","Data":"b769fe5cfb1fc7d5b5ee68ace62fecdb48d6c5352537c2547bfefe7eccf1bdf1"} Jan 23 12:02:48 crc kubenswrapper[4865]: I0123 12:02:48.776475 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:02:48 crc kubenswrapper[4865]: I0123 12:02:48.777224 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:02:49 crc kubenswrapper[4865]: I0123 12:02:49.324119 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" event={"ID":"4afdf9c5-20b1-4482-a599-36000ac58add","Type":"ContainerStarted","Data":"cf1870279bd7d4f7b9d757a7bdbcdb14fc2ce0107e94025fc74d9ee31ccb9bfc"} Jan 23 12:02:49 crc kubenswrapper[4865]: I0123 12:02:49.324659 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" event={"ID":"4afdf9c5-20b1-4482-a599-36000ac58add","Type":"ContainerStarted","Data":"0a1a6177cefaa1bf2296e0caa39712ceefdec5f1cfd3c6c94d5d9b383f2d6cc2"} Jan 23 12:02:49 crc kubenswrapper[4865]: I0123 12:02:49.324684 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" event={"ID":"4afdf9c5-20b1-4482-a599-36000ac58add","Type":"ContainerStarted","Data":"8872e35144127961d411af3362e3de650e8c5f2e903d889a28f9194d83224a1a"} Jan 23 12:02:49 crc kubenswrapper[4865]: I0123 12:02:49.324707 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" event={"ID":"4afdf9c5-20b1-4482-a599-36000ac58add","Type":"ContainerStarted","Data":"b61bc6ac370c2594a54e1eb8a80debf68803a146090a3170c5c63029ef18baf8"} Jan 23 12:02:50 crc kubenswrapper[4865]: I0123 12:02:50.335845 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" event={"ID":"4afdf9c5-20b1-4482-a599-36000ac58add","Type":"ContainerStarted","Data":"567c37f2491457bc0e07c40745c7436be84af59903e0df8ec78a2209ee94dfb3"} Jan 23 12:02:52 crc kubenswrapper[4865]: I0123 12:02:52.442240 4865 scope.go:117] "RemoveContainer" containerID="9101d6aa6d45e53ba1a927cd50a0f155bf8bc6d2819eaf1206a393f154dcfb06" Jan 23 12:02:53 crc kubenswrapper[4865]: I0123 12:02:53.385299 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" event={"ID":"4afdf9c5-20b1-4482-a599-36000ac58add","Type":"ContainerStarted","Data":"1b1dbd03455707d78d2c252230b88799cf3f0c10b5eaae034a88031857021506"} Jan 23 12:02:53 crc kubenswrapper[4865]: I0123 12:02:53.387094 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cb8rs_b3d06336-44ac-4c17-899b-28cbfe2ee64d/kube-multus/2.log" Jan 23 12:02:56 crc kubenswrapper[4865]: I0123 12:02:56.412138 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" event={"ID":"4afdf9c5-20b1-4482-a599-36000ac58add","Type":"ContainerStarted","Data":"716e211a8789c1a114bf42e0316efe673880d8aa09023a02e003c6aaaea358af"} Jan 23 12:02:56 crc 
kubenswrapper[4865]: I0123 12:02:56.413018 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:56 crc kubenswrapper[4865]: I0123 12:02:56.413036 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:56 crc kubenswrapper[4865]: I0123 12:02:56.413047 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:56 crc kubenswrapper[4865]: I0123 12:02:56.445105 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:56 crc kubenswrapper[4865]: I0123 12:02:56.448050 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:02:56 crc kubenswrapper[4865]: I0123 12:02:56.485407 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" podStartSLOduration=10.485383651 podStartE2EDuration="10.485383651s" podCreationTimestamp="2026-01-23 12:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:02:56.456184055 +0000 UTC m=+620.625256281" watchObservedRunningTime="2026-01-23 12:02:56.485383651 +0000 UTC m=+620.654455867" Jan 23 12:02:59 crc kubenswrapper[4865]: I0123 12:02:59.117899 4865 scope.go:117] "RemoveContainer" containerID="00a6f6797638587efdf93cfa4a2c2f18b2ee85067681c5118db013881e63b8a4" Jan 23 12:02:59 crc kubenswrapper[4865]: E0123 12:02:59.118509 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-cb8rs_openshift-multus(b3d06336-44ac-4c17-899b-28cbfe2ee64d)\"" pod="openshift-multus/multus-cb8rs" podUID="b3d06336-44ac-4c17-899b-28cbfe2ee64d" Jan 23 12:03:10 crc kubenswrapper[4865]: I0123 12:03:10.117920 4865 scope.go:117] "RemoveContainer" containerID="00a6f6797638587efdf93cfa4a2c2f18b2ee85067681c5118db013881e63b8a4" Jan 23 12:03:13 crc kubenswrapper[4865]: I0123 12:03:13.514804 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cb8rs_b3d06336-44ac-4c17-899b-28cbfe2ee64d/kube-multus/2.log" Jan 23 12:03:13 crc kubenswrapper[4865]: I0123 12:03:13.515172 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cb8rs" event={"ID":"b3d06336-44ac-4c17-899b-28cbfe2ee64d","Type":"ContainerStarted","Data":"79e2bb54b3f8cfa7791df30c0b281d20054ba8b86acb31404778d9473f6772d6"} Jan 23 12:03:16 crc kubenswrapper[4865]: I0123 12:03:16.570094 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:03:18 crc kubenswrapper[4865]: I0123 12:03:18.776871 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:03:18 crc kubenswrapper[4865]: I0123 12:03:18.777585 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:03:18 crc kubenswrapper[4865]: I0123 12:03:18.777687 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 12:03:18 crc kubenswrapper[4865]: I0123 12:03:18.778638 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff98bb889080e3c7f19be17161b36fc32daa7506d4ebfb5788d7c8ff79bcc3ed"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 12:03:18 crc kubenswrapper[4865]: I0123 12:03:18.778701 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://ff98bb889080e3c7f19be17161b36fc32daa7506d4ebfb5788d7c8ff79bcc3ed" gracePeriod=600 Jan 23 12:03:26 crc kubenswrapper[4865]: I0123 12:03:26.288462 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="ff98bb889080e3c7f19be17161b36fc32daa7506d4ebfb5788d7c8ff79bcc3ed" exitCode=0 Jan 23 12:03:26 crc kubenswrapper[4865]: I0123 12:03:26.288522 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"ff98bb889080e3c7f19be17161b36fc32daa7506d4ebfb5788d7c8ff79bcc3ed"} Jan 23 12:03:26 crc kubenswrapper[4865]: I0123 12:03:26.289792 4865 scope.go:117] "RemoveContainer" containerID="a6b1b0eba2941eeb0825d0b03a6164c492659197949b6b6163a76c28e2d0b61a" Jan 23 12:03:27 crc kubenswrapper[4865]: I0123 12:03:27.299064 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"6d9cd586c30c8b5457d84dc80396ec6c6d5bb6dd4d7eb00e56b29553f41be78a"} Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.464784 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh"] Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.466229 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.469825 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.482096 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh"] Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.597769 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh\" (UID: \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.597840 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxbbc\" (UniqueName: \"kubernetes.io/projected/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-kube-api-access-rxbbc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh\" (UID: \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.597903 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh\" (UID: \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.699452 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh\" (UID: \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.699507 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbbc\" (UniqueName: \"kubernetes.io/projected/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-kube-api-access-rxbbc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh\" (UID: \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.699571 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh\" (UID: \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.700331 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh\" (UID: \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.700342 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh\" (UID: \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.726561 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbbc\" (UniqueName: \"kubernetes.io/projected/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-kube-api-access-rxbbc\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh\" (UID: \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.781054 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:29 crc kubenswrapper[4865]: I0123 12:03:29.989413 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh"] Jan 23 12:03:30 crc kubenswrapper[4865]: I0123 12:03:30.316988 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" event={"ID":"986d6557-a0eb-4856-bf2a-9cffeaaec5b6","Type":"ContainerStarted","Data":"87e4b13b168b58f97acf76217d8b050b5294e96cc1a5a05eedf0b7a3a89dcd1f"} Jan 23 12:03:31 crc kubenswrapper[4865]: I0123 12:03:31.326433 4865 generic.go:334] "Generic (PLEG): container finished" podID="986d6557-a0eb-4856-bf2a-9cffeaaec5b6" containerID="ddc63db153967b8db772d3d05ee527550f6b5a9458fedc655e125a75d0e44eab" exitCode=0 Jan 23 12:03:31 crc kubenswrapper[4865]: I0123 12:03:31.326569 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" event={"ID":"986d6557-a0eb-4856-bf2a-9cffeaaec5b6","Type":"ContainerDied","Data":"ddc63db153967b8db772d3d05ee527550f6b5a9458fedc655e125a75d0e44eab"} Jan 23 12:03:33 crc kubenswrapper[4865]: I0123 12:03:33.350964 4865 generic.go:334] "Generic (PLEG): container finished" podID="986d6557-a0eb-4856-bf2a-9cffeaaec5b6" containerID="49a4dcd5f3e99b28e02ad85a4cf64b6aeafcde2603a5620528db60f6538193d6" exitCode=0 Jan 23 12:03:33 crc kubenswrapper[4865]: I0123 12:03:33.351280 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" event={"ID":"986d6557-a0eb-4856-bf2a-9cffeaaec5b6","Type":"ContainerDied","Data":"49a4dcd5f3e99b28e02ad85a4cf64b6aeafcde2603a5620528db60f6538193d6"} Jan 23 12:03:34 crc kubenswrapper[4865]: I0123 12:03:34.362943 4865 generic.go:334] "Generic (PLEG): container finished" podID="986d6557-a0eb-4856-bf2a-9cffeaaec5b6" containerID="c46219688dcda2029a891b9314231367c461163bc20953d6f7addaf9c25429a9" exitCode=0 Jan 23 12:03:34 crc kubenswrapper[4865]: I0123 
12:03:34.363067 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" event={"ID":"986d6557-a0eb-4856-bf2a-9cffeaaec5b6","Type":"ContainerDied","Data":"c46219688dcda2029a891b9314231367c461163bc20953d6f7addaf9c25429a9"} Jan 23 12:03:35 crc kubenswrapper[4865]: I0123 12:03:35.819649 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:35 crc kubenswrapper[4865]: I0123 12:03:35.885729 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxbbc\" (UniqueName: \"kubernetes.io/projected/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-kube-api-access-rxbbc\") pod \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\" (UID: \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\") " Jan 23 12:03:35 crc kubenswrapper[4865]: I0123 12:03:35.885903 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-bundle\") pod \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\" (UID: \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\") " Jan 23 12:03:35 crc kubenswrapper[4865]: I0123 12:03:35.885992 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-util\") pod \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\" (UID: \"986d6557-a0eb-4856-bf2a-9cffeaaec5b6\") " Jan 23 12:03:35 crc kubenswrapper[4865]: I0123 12:03:35.886855 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-bundle" (OuterVolumeSpecName: "bundle") pod "986d6557-a0eb-4856-bf2a-9cffeaaec5b6" (UID: "986d6557-a0eb-4856-bf2a-9cffeaaec5b6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:03:35 crc kubenswrapper[4865]: I0123 12:03:35.888900 4865 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:03:35 crc kubenswrapper[4865]: I0123 12:03:35.897035 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-kube-api-access-rxbbc" (OuterVolumeSpecName: "kube-api-access-rxbbc") pod "986d6557-a0eb-4856-bf2a-9cffeaaec5b6" (UID: "986d6557-a0eb-4856-bf2a-9cffeaaec5b6"). InnerVolumeSpecName "kube-api-access-rxbbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:03:35 crc kubenswrapper[4865]: I0123 12:03:35.990553 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxbbc\" (UniqueName: \"kubernetes.io/projected/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-kube-api-access-rxbbc\") on node \"crc\" DevicePath \"\"" Jan 23 12:03:36 crc kubenswrapper[4865]: I0123 12:03:36.308460 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-util" (OuterVolumeSpecName: "util") pod "986d6557-a0eb-4856-bf2a-9cffeaaec5b6" (UID: "986d6557-a0eb-4856-bf2a-9cffeaaec5b6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:03:36 crc kubenswrapper[4865]: I0123 12:03:36.378822 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" event={"ID":"986d6557-a0eb-4856-bf2a-9cffeaaec5b6","Type":"ContainerDied","Data":"87e4b13b168b58f97acf76217d8b050b5294e96cc1a5a05eedf0b7a3a89dcd1f"} Jan 23 12:03:36 crc kubenswrapper[4865]: I0123 12:03:36.379256 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87e4b13b168b58f97acf76217d8b050b5294e96cc1a5a05eedf0b7a3a89dcd1f" Jan 23 12:03:36 crc kubenswrapper[4865]: I0123 12:03:36.378872 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713df6xh" Jan 23 12:03:36 crc kubenswrapper[4865]: I0123 12:03:36.396668 4865 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/986d6557-a0eb-4856-bf2a-9cffeaaec5b6-util\") on node \"crc\" DevicePath \"\"" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.341238 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-8f8tf"] Jan 23 12:03:38 crc kubenswrapper[4865]: E0123 12:03:38.341438 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986d6557-a0eb-4856-bf2a-9cffeaaec5b6" containerName="pull" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.341450 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="986d6557-a0eb-4856-bf2a-9cffeaaec5b6" containerName="pull" Jan 23 12:03:38 crc kubenswrapper[4865]: E0123 12:03:38.341468 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986d6557-a0eb-4856-bf2a-9cffeaaec5b6" containerName="util" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.341473 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="986d6557-a0eb-4856-bf2a-9cffeaaec5b6" containerName="util" Jan 23 12:03:38 crc kubenswrapper[4865]: E0123 12:03:38.341483 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="986d6557-a0eb-4856-bf2a-9cffeaaec5b6" containerName="extract" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.341488 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="986d6557-a0eb-4856-bf2a-9cffeaaec5b6" containerName="extract" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.341579 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="986d6557-a0eb-4856-bf2a-9cffeaaec5b6" containerName="extract" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.342012 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-8f8tf" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.344461 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.344877 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zrzvh" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.345074 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.359366 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-8f8tf"] Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.429186 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x29mm\" (UniqueName: \"kubernetes.io/projected/9f45c9bc-a59f-44e7-bbfa-8fc1c1fcc7e3-kube-api-access-x29mm\") pod \"nmstate-operator-646758c888-8f8tf\" (UID: \"9f45c9bc-a59f-44e7-bbfa-8fc1c1fcc7e3\") " pod="openshift-nmstate/nmstate-operator-646758c888-8f8tf" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.530131 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x29mm\" (UniqueName: \"kubernetes.io/projected/9f45c9bc-a59f-44e7-bbfa-8fc1c1fcc7e3-kube-api-access-x29mm\") pod \"nmstate-operator-646758c888-8f8tf\" (UID: \"9f45c9bc-a59f-44e7-bbfa-8fc1c1fcc7e3\") " pod="openshift-nmstate/nmstate-operator-646758c888-8f8tf" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.549723 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x29mm\" (UniqueName: \"kubernetes.io/projected/9f45c9bc-a59f-44e7-bbfa-8fc1c1fcc7e3-kube-api-access-x29mm\") pod \"nmstate-operator-646758c888-8f8tf\" (UID: \"9f45c9bc-a59f-44e7-bbfa-8fc1c1fcc7e3\") " pod="openshift-nmstate/nmstate-operator-646758c888-8f8tf" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.654573 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-8f8tf" Jan 23 12:03:38 crc kubenswrapper[4865]: I0123 12:03:38.849373 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-8f8tf"] Jan 23 12:03:39 crc kubenswrapper[4865]: I0123 12:03:39.396420 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-8f8tf" event={"ID":"9f45c9bc-a59f-44e7-bbfa-8fc1c1fcc7e3","Type":"ContainerStarted","Data":"5a39269646078cadd77ed8bd5cd4e44ba2e3dbe0e53fa06a11163583da80e2c8"} Jan 23 12:03:42 crc kubenswrapper[4865]: I0123 12:03:42.412501 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-8f8tf" event={"ID":"9f45c9bc-a59f-44e7-bbfa-8fc1c1fcc7e3","Type":"ContainerStarted","Data":"b8559ad4bf344344be0e15ce44f48eef50221c26ae03e44179a3339653821584"} Jan 23 12:03:42 crc kubenswrapper[4865]: I0123 12:03:42.431542 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-8f8tf" podStartSLOduration=1.410423645 podStartE2EDuration="4.431524085s" podCreationTimestamp="2026-01-23 12:03:38 +0000 UTC" firstStartedPulling="2026-01-23 12:03:38.861950227 +0000 UTC m=+663.031022453" lastFinishedPulling="2026-01-23 12:03:41.883050667 +0000 UTC m=+666.052122893" observedRunningTime="2026-01-23 12:03:42.427839985 +0000 UTC m=+666.596912221" watchObservedRunningTime="2026-01-23 12:03:42.431524085 +0000 UTC m=+666.600596311" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.674205 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-zx98c"] Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.675719 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-zx98c" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.678067 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ccls2" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.693588 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5"] Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.694447 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.696414 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.707375 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-zx98c"] Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.712264 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5"] Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.783095 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8547q"] Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.784156 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.787704 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xxm7\" (UniqueName: \"kubernetes.io/projected/c6cf7afb-e04b-428e-a9d6-448bec887e7e-kube-api-access-7xxm7\") pod \"nmstate-webhook-8474b5b9d8-qtxv5\" (UID: \"c6cf7afb-e04b-428e-a9d6-448bec887e7e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.787804 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gptgd\" (UniqueName: \"kubernetes.io/projected/1ef2f0c8-cd90-4f20-b925-c8679b165c05-kube-api-access-gptgd\") pod \"nmstate-metrics-54757c584b-zx98c\" (UID: \"1ef2f0c8-cd90-4f20-b925-c8679b165c05\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-zx98c" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.787841 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c6cf7afb-e04b-428e-a9d6-448bec887e7e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qtxv5\" (UID: \"c6cf7afb-e04b-428e-a9d6-448bec887e7e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.889471 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fh9k\" (UniqueName: \"kubernetes.io/projected/218b7d21-dfbb-42f7-a115-3867493d97b3-kube-api-access-8fh9k\") pod \"nmstate-handler-8547q\" (UID: \"218b7d21-dfbb-42f7-a115-3867493d97b3\") " pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.889532 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/218b7d21-dfbb-42f7-a115-3867493d97b3-nmstate-lock\") pod \"nmstate-handler-8547q\" (UID: \"218b7d21-dfbb-42f7-a115-3867493d97b3\") " pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.889612 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xxm7\" (UniqueName: \"kubernetes.io/projected/c6cf7afb-e04b-428e-a9d6-448bec887e7e-kube-api-access-7xxm7\") pod \"nmstate-webhook-8474b5b9d8-qtxv5\" (UID: \"c6cf7afb-e04b-428e-a9d6-448bec887e7e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.889815 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/218b7d21-dfbb-42f7-a115-3867493d97b3-dbus-socket\") pod \"nmstate-handler-8547q\" (UID: \"218b7d21-dfbb-42f7-a115-3867493d97b3\") " pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.890112 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gptgd\" (UniqueName: \"kubernetes.io/projected/1ef2f0c8-cd90-4f20-b925-c8679b165c05-kube-api-access-gptgd\") pod \"nmstate-metrics-54757c584b-zx98c\" (UID: \"1ef2f0c8-cd90-4f20-b925-c8679b165c05\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-zx98c" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.890143 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c6cf7afb-e04b-428e-a9d6-448bec887e7e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qtxv5\" (UID: \"c6cf7afb-e04b-428e-a9d6-448bec887e7e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.890198 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/218b7d21-dfbb-42f7-a115-3867493d97b3-ovs-socket\") pod \"nmstate-handler-8547q\" (UID: \"218b7d21-dfbb-42f7-a115-3867493d97b3\") " pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:49 crc kubenswrapper[4865]: E0123 12:03:49.890336 4865 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 23 12:03:49 crc kubenswrapper[4865]: E0123 12:03:49.890429 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6cf7afb-e04b-428e-a9d6-448bec887e7e-tls-key-pair podName:c6cf7afb-e04b-428e-a9d6-448bec887e7e nodeName:}" failed. No retries permitted until 2026-01-23 12:03:50.390408714 +0000 UTC m=+674.559480940 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c6cf7afb-e04b-428e-a9d6-448bec887e7e-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-qtxv5" (UID: "c6cf7afb-e04b-428e-a9d6-448bec887e7e") : secret "openshift-nmstate-webhook" not found Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.899997 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt"] Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.900959 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.902764 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.903412 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt"] Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.904209 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-txjkp" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.910629 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.953820 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gptgd\" (UniqueName: \"kubernetes.io/projected/1ef2f0c8-cd90-4f20-b925-c8679b165c05-kube-api-access-gptgd\") pod \"nmstate-metrics-54757c584b-zx98c\" (UID: \"1ef2f0c8-cd90-4f20-b925-c8679b165c05\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-zx98c" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.965666 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xxm7\" (UniqueName: \"kubernetes.io/projected/c6cf7afb-e04b-428e-a9d6-448bec887e7e-kube-api-access-7xxm7\") pod \"nmstate-webhook-8474b5b9d8-qtxv5\" (UID: \"c6cf7afb-e04b-428e-a9d6-448bec887e7e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.991015 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ba215d-080e-4aed-acb5-0c01cb2abacc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d5wwt\" (UID: \"f1ba215d-080e-4aed-acb5-0c01cb2abacc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.991077 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fh9k\" (UniqueName: \"kubernetes.io/projected/218b7d21-dfbb-42f7-a115-3867493d97b3-kube-api-access-8fh9k\") pod \"nmstate-handler-8547q\" (UID: \"218b7d21-dfbb-42f7-a115-3867493d97b3\") " pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.991114 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/218b7d21-dfbb-42f7-a115-3867493d97b3-nmstate-lock\") pod \"nmstate-handler-8547q\" (UID: \"218b7d21-dfbb-42f7-a115-3867493d97b3\") " pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.991159 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/218b7d21-dfbb-42f7-a115-3867493d97b3-dbus-socket\") pod \"nmstate-handler-8547q\" (UID: \"218b7d21-dfbb-42f7-a115-3867493d97b3\") " pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.991209 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f1ba215d-080e-4aed-acb5-0c01cb2abacc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-d5wwt\" (UID: 
\"f1ba215d-080e-4aed-acb5-0c01cb2abacc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.991239 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45sdk\" (UniqueName: \"kubernetes.io/projected/f1ba215d-080e-4aed-acb5-0c01cb2abacc-kube-api-access-45sdk\") pod \"nmstate-console-plugin-7754f76f8b-d5wwt\" (UID: \"f1ba215d-080e-4aed-acb5-0c01cb2abacc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.991282 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/218b7d21-dfbb-42f7-a115-3867493d97b3-ovs-socket\") pod \"nmstate-handler-8547q\" (UID: \"218b7d21-dfbb-42f7-a115-3867493d97b3\") " pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.991372 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/218b7d21-dfbb-42f7-a115-3867493d97b3-ovs-socket\") pod \"nmstate-handler-8547q\" (UID: \"218b7d21-dfbb-42f7-a115-3867493d97b3\") " pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.991738 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/218b7d21-dfbb-42f7-a115-3867493d97b3-nmstate-lock\") pod \"nmstate-handler-8547q\" (UID: \"218b7d21-dfbb-42f7-a115-3867493d97b3\") " pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:49 crc kubenswrapper[4865]: I0123 12:03:49.992106 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/218b7d21-dfbb-42f7-a115-3867493d97b3-dbus-socket\") pod \"nmstate-handler-8547q\" (UID: \"218b7d21-dfbb-42f7-a115-3867493d97b3\") " pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.005970 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-zx98c" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.021334 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fh9k\" (UniqueName: \"kubernetes.io/projected/218b7d21-dfbb-42f7-a115-3867493d97b3-kube-api-access-8fh9k\") pod \"nmstate-handler-8547q\" (UID: \"218b7d21-dfbb-42f7-a115-3867493d97b3\") " pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.092158 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f1ba215d-080e-4aed-acb5-0c01cb2abacc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-d5wwt\" (UID: \"f1ba215d-080e-4aed-acb5-0c01cb2abacc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.092217 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45sdk\" (UniqueName: \"kubernetes.io/projected/f1ba215d-080e-4aed-acb5-0c01cb2abacc-kube-api-access-45sdk\") pod \"nmstate-console-plugin-7754f76f8b-d5wwt\" (UID: \"f1ba215d-080e-4aed-acb5-0c01cb2abacc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.092266 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ba215d-080e-4aed-acb5-0c01cb2abacc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d5wwt\" (UID: \"f1ba215d-080e-4aed-acb5-0c01cb2abacc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" Jan 23 12:03:50 crc kubenswrapper[4865]: E0123 12:03:50.092395 4865 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 23 12:03:50 crc kubenswrapper[4865]: E0123 12:03:50.092447 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1ba215d-080e-4aed-acb5-0c01cb2abacc-plugin-serving-cert podName:f1ba215d-080e-4aed-acb5-0c01cb2abacc nodeName:}" failed. No retries permitted until 2026-01-23 12:03:50.592431873 +0000 UTC m=+674.761504099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/f1ba215d-080e-4aed-acb5-0c01cb2abacc-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-d5wwt" (UID: "f1ba215d-080e-4aed-acb5-0c01cb2abacc") : secret "plugin-serving-cert" not found Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.093274 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/f1ba215d-080e-4aed-acb5-0c01cb2abacc-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-d5wwt\" (UID: \"f1ba215d-080e-4aed-acb5-0c01cb2abacc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.105290 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.116700 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45sdk\" (UniqueName: \"kubernetes.io/projected/f1ba215d-080e-4aed-acb5-0c01cb2abacc-kube-api-access-45sdk\") pod \"nmstate-console-plugin-7754f76f8b-d5wwt\" (UID: \"f1ba215d-080e-4aed-acb5-0c01cb2abacc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" Jan 23 12:03:50 crc kubenswrapper[4865]: W0123 12:03:50.146800 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod218b7d21_dfbb_42f7_a115_3867493d97b3.slice/crio-c11d0ffdcb2a6b2df87c481354c61352bf59c5b19bf711ef06ec44bcc9b5d8f2 WatchSource:0}: Error finding container c11d0ffdcb2a6b2df87c481354c61352bf59c5b19bf711ef06ec44bcc9b5d8f2: Status 404 returned error can't find the container with id c11d0ffdcb2a6b2df87c481354c61352bf59c5b19bf711ef06ec44bcc9b5d8f2 Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.150939 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d7d54b946-29gbz"] Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.151757 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.198269 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-console-config\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.198319 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-trusted-ca-bundle\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.198506 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbrnh\" (UniqueName: \"kubernetes.io/projected/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-kube-api-access-vbrnh\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.198594 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-service-ca\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.198671 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-oauth-serving-cert\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.198742 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-console-serving-cert\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.198774 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-console-oauth-config\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.214634 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d7d54b946-29gbz"] Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.300804 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-console-serving-cert\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.300873 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-console-oauth-config\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.300908 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-console-config\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.300932 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-trusted-ca-bundle\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.300974 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbrnh\" (UniqueName: \"kubernetes.io/projected/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-kube-api-access-vbrnh\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.301038 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-service-ca\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.301079 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-oauth-serving-cert\") pod 
\"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.302848 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-oauth-serving-cert\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.303245 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-console-config\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.304191 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-trusted-ca-bundle\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.304499 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-service-ca\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.308316 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-console-oauth-config\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.320752 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-console-serving-cert\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.322381 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbrnh\" (UniqueName: \"kubernetes.io/projected/9e2332f2-6e3b-4355-9af1-24a8980c7d8a-kube-api-access-vbrnh\") pod \"console-5d7d54b946-29gbz\" (UID: \"9e2332f2-6e3b-4355-9af1-24a8980c7d8a\") " pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.323690 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-zx98c"] Jan 23 12:03:50 crc kubenswrapper[4865]: W0123 12:03:50.329689 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ef2f0c8_cd90_4f20_b925_c8679b165c05.slice/crio-dfaee7f9813c7094dde29abbf7436888c62095043c3a11b648763368e2b8322d WatchSource:0}: Error finding container dfaee7f9813c7094dde29abbf7436888c62095043c3a11b648763368e2b8322d: Status 404 returned error can't find the container with id dfaee7f9813c7094dde29abbf7436888c62095043c3a11b648763368e2b8322d Jan 23 
12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.402764 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c6cf7afb-e04b-428e-a9d6-448bec887e7e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qtxv5\" (UID: \"c6cf7afb-e04b-428e-a9d6-448bec887e7e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.407130 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c6cf7afb-e04b-428e-a9d6-448bec887e7e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qtxv5\" (UID: \"c6cf7afb-e04b-428e-a9d6-448bec887e7e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.462471 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8547q" event={"ID":"218b7d21-dfbb-42f7-a115-3867493d97b3","Type":"ContainerStarted","Data":"c11d0ffdcb2a6b2df87c481354c61352bf59c5b19bf711ef06ec44bcc9b5d8f2"} Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.463204 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-zx98c" event={"ID":"1ef2f0c8-cd90-4f20-b925-c8679b165c05","Type":"ContainerStarted","Data":"dfaee7f9813c7094dde29abbf7436888c62095043c3a11b648763368e2b8322d"} Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.487193 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.605891 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ba215d-080e-4aed-acb5-0c01cb2abacc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d5wwt\" (UID: \"f1ba215d-080e-4aed-acb5-0c01cb2abacc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.610829 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/f1ba215d-080e-4aed-acb5-0c01cb2abacc-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-d5wwt\" (UID: \"f1ba215d-080e-4aed-acb5-0c01cb2abacc\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.656150 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.706962 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d7d54b946-29gbz"] Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.816839 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" Jan 23 12:03:50 crc kubenswrapper[4865]: I0123 12:03:50.946503 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5"] Jan 23 12:03:50 crc kubenswrapper[4865]: W0123 12:03:50.969807 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6cf7afb_e04b_428e_a9d6_448bec887e7e.slice/crio-ad000f2dfe30dedbd45b4618fcafc8794246bc901e6396c1d82dfb67b02faedd WatchSource:0}: Error finding container ad000f2dfe30dedbd45b4618fcafc8794246bc901e6396c1d82dfb67b02faedd: Status 404 returned error can't find the container with id ad000f2dfe30dedbd45b4618fcafc8794246bc901e6396c1d82dfb67b02faedd Jan 23 12:03:51 crc kubenswrapper[4865]: I0123 12:03:51.082698 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt"] Jan 23 12:03:51 crc kubenswrapper[4865]: W0123 12:03:51.086446 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1ba215d_080e_4aed_acb5_0c01cb2abacc.slice/crio-0c5c3aa68082959c4f344c6f683d533deb8b36cefb5d514f46d9baac242efa38 WatchSource:0}: Error finding container 0c5c3aa68082959c4f344c6f683d533deb8b36cefb5d514f46d9baac242efa38: Status 404 returned error can't find the container with id 0c5c3aa68082959c4f344c6f683d533deb8b36cefb5d514f46d9baac242efa38 Jan 23 12:03:51 crc kubenswrapper[4865]: I0123 12:03:51.484670 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d7d54b946-29gbz" event={"ID":"9e2332f2-6e3b-4355-9af1-24a8980c7d8a","Type":"ContainerStarted","Data":"02538d47a4f7198d06ac45cdff31ecba4f646e402e14499af85a0e57e26dbec9"} Jan 23 12:03:51 crc kubenswrapper[4865]: I0123 12:03:51.485046 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d7d54b946-29gbz" event={"ID":"9e2332f2-6e3b-4355-9af1-24a8980c7d8a","Type":"ContainerStarted","Data":"342a11ce94b672809fda00fc48e8ffcd183330256dd639701c4e619163d775b7"} Jan 23 12:03:51 crc kubenswrapper[4865]: I0123 12:03:51.490181 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" event={"ID":"f1ba215d-080e-4aed-acb5-0c01cb2abacc","Type":"ContainerStarted","Data":"0c5c3aa68082959c4f344c6f683d533deb8b36cefb5d514f46d9baac242efa38"} Jan 23 12:03:51 crc kubenswrapper[4865]: I0123 12:03:51.492301 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" event={"ID":"c6cf7afb-e04b-428e-a9d6-448bec887e7e","Type":"ContainerStarted","Data":"ad000f2dfe30dedbd45b4618fcafc8794246bc901e6396c1d82dfb67b02faedd"} Jan 23 12:03:51 crc kubenswrapper[4865]: I0123 12:03:51.509987 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d7d54b946-29gbz" podStartSLOduration=1.509948084 podStartE2EDuration="1.509948084s" podCreationTimestamp="2026-01-23 12:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:03:51.503875285 +0000 UTC m=+675.672947551" watchObservedRunningTime="2026-01-23 12:03:51.509948084 +0000 UTC m=+675.679020350" Jan 23 12:03:53 crc kubenswrapper[4865]: I0123 12:03:53.509765 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-zx98c" event={"ID":"1ef2f0c8-cd90-4f20-b925-c8679b165c05","Type":"ContainerStarted","Data":"444ecb3df52447cb8189f1127e65fd4886dda70187e97c2d61fb9bcaea4f60c2"} Jan 23 12:03:54 crc kubenswrapper[4865]: I0123 12:03:54.521121 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8547q" event={"ID":"218b7d21-dfbb-42f7-a115-3867493d97b3","Type":"ContainerStarted","Data":"1017e5951d9e351015d7cb2d3a66014210af00c7475e12180583638e9af6625d"} Jan 23 12:03:54 crc kubenswrapper[4865]: I0123 12:03:54.524841 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:03:54 crc kubenswrapper[4865]: I0123 12:03:54.526999 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" event={"ID":"c6cf7afb-e04b-428e-a9d6-448bec887e7e","Type":"ContainerStarted","Data":"4617bf7a9aff808a67edc9b5c09282bb0655aaeee4e245ca6742ea6da36a2025"} Jan 23 12:03:54 crc kubenswrapper[4865]: I0123 12:03:54.527714 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:03:54 crc kubenswrapper[4865]: I0123 12:03:54.556129 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8547q" podStartSLOduration=2.432912826 podStartE2EDuration="5.556110717s" podCreationTimestamp="2026-01-23 12:03:49 +0000 UTC" firstStartedPulling="2026-01-23 12:03:50.155085489 +0000 UTC m=+674.324157715" lastFinishedPulling="2026-01-23 12:03:53.27828338 +0000 UTC m=+677.447355606" observedRunningTime="2026-01-23 12:03:54.551517215 +0000 UTC m=+678.720589431" watchObservedRunningTime="2026-01-23 12:03:54.556110717 +0000 UTC m=+678.725182943" Jan 23 12:03:54 crc kubenswrapper[4865]: I0123 12:03:54.568720 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" podStartSLOduration=3.1776286320000002 podStartE2EDuration="5.568694225s" podCreationTimestamp="2026-01-23 12:03:49 +0000 UTC" firstStartedPulling="2026-01-23 12:03:50.972066015 +0000 UTC m=+675.141138241" lastFinishedPulling="2026-01-23 12:03:53.363131598 +0000 UTC m=+677.532203834" observedRunningTime="2026-01-23 12:03:54.564438541 +0000 UTC m=+678.733510767" watchObservedRunningTime="2026-01-23 12:03:54.568694225 +0000 UTC m=+678.737766451" Jan 23 12:03:55 crc kubenswrapper[4865]: I0123 12:03:55.534219 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" event={"ID":"f1ba215d-080e-4aed-acb5-0c01cb2abacc","Type":"ContainerStarted","Data":"7e899a0a884f4b7c9dae4e74839d34ac6e299e083058c317d2d6c46dbb2585d3"} Jan 23 12:03:55 crc kubenswrapper[4865]: I0123 12:03:55.557427 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-d5wwt" podStartSLOduration=3.2061498999999998 podStartE2EDuration="6.557399359s" podCreationTimestamp="2026-01-23 12:03:49 +0000 UTC" firstStartedPulling="2026-01-23 12:03:51.089633285 +0000 UTC m=+675.258705511" lastFinishedPulling="2026-01-23 12:03:54.440882734 +0000 UTC m=+678.609954970" observedRunningTime="2026-01-23 12:03:55.555425362 +0000 UTC m=+679.724497588" watchObservedRunningTime="2026-01-23 12:03:55.557399359 +0000 UTC m=+679.726471585" Jan 23 12:03:56 crc kubenswrapper[4865]: I0123 12:03:56.542933 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-zx98c" event={"ID":"1ef2f0c8-cd90-4f20-b925-c8679b165c05","Type":"ContainerStarted","Data":"70db3d1d2916bf96dcf45fd153f6f1f94c7f69b1ee8655678e464b2c0151fa82"} Jan 23 12:03:56 crc kubenswrapper[4865]: I0123 12:03:56.566685 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-zx98c" podStartSLOduration=2.272059274 podStartE2EDuration="7.566666727s" podCreationTimestamp="2026-01-23 12:03:49 +0000 UTC" firstStartedPulling="2026-01-23 12:03:50.333025018 +0000 UTC m=+674.502097244" lastFinishedPulling="2026-01-23 12:03:55.627632471 +0000 UTC m=+679.796704697" observedRunningTime="2026-01-23 12:03:56.559983753 +0000 UTC m=+680.729055979" watchObservedRunningTime="2026-01-23 12:03:56.566666727 +0000 UTC m=+680.735738953" Jan 23 12:04:00 crc kubenswrapper[4865]: I0123 12:04:00.125951 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:04:00 crc kubenswrapper[4865]: I0123 12:04:00.488261 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:04:00 crc kubenswrapper[4865]: I0123 12:04:00.488313 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:04:00 crc kubenswrapper[4865]: I0123 12:04:00.497175 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:04:00 crc kubenswrapper[4865]: I0123 12:04:00.573386 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:04:00 crc kubenswrapper[4865]: I0123 12:04:00.636862 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bpdjt"] Jan 23 12:04:10 crc kubenswrapper[4865]: I0123 12:04:10.664903 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.675290 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6"] Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.677252 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.679299 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.723593 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6"] Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.789035 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8xxf\" (UniqueName: \"kubernetes.io/projected/5940e10f-af18-446f-b651-032799eef129-kube-api-access-g8xxf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6\" (UID: \"5940e10f-af18-446f-b651-032799eef129\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.789127 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5940e10f-af18-446f-b651-032799eef129-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6\" (UID: \"5940e10f-af18-446f-b651-032799eef129\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.789157 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5940e10f-af18-446f-b651-032799eef129-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6\" (UID: \"5940e10f-af18-446f-b651-032799eef129\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.890715 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5940e10f-af18-446f-b651-032799eef129-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6\" (UID: \"5940e10f-af18-446f-b651-032799eef129\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.890764 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5940e10f-af18-446f-b651-032799eef129-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6\" (UID: \"5940e10f-af18-446f-b651-032799eef129\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.890814 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8xxf\" (UniqueName: \"kubernetes.io/projected/5940e10f-af18-446f-b651-032799eef129-kube-api-access-g8xxf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6\" (UID: \"5940e10f-af18-446f-b651-032799eef129\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.892020 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/5940e10f-af18-446f-b651-032799eef129-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6\" (UID: \"5940e10f-af18-446f-b651-032799eef129\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.892484 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5940e10f-af18-446f-b651-032799eef129-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6\" (UID: \"5940e10f-af18-446f-b651-032799eef129\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:23 crc kubenswrapper[4865]: I0123 12:04:23.919073 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8xxf\" (UniqueName: \"kubernetes.io/projected/5940e10f-af18-446f-b651-032799eef129-kube-api-access-g8xxf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6\" (UID: \"5940e10f-af18-446f-b651-032799eef129\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:24 crc kubenswrapper[4865]: I0123 12:04:24.002446 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:24 crc kubenswrapper[4865]: I0123 12:04:24.232090 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6"] Jan 23 12:04:24 crc kubenswrapper[4865]: I0123 12:04:24.734090 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" event={"ID":"5940e10f-af18-446f-b651-032799eef129","Type":"ContainerStarted","Data":"69ff32e8b7abd8036ff1a4a87d4340192dc5a5809c066e04f5517031361bdee2"} Jan 23 12:04:24 crc kubenswrapper[4865]: I0123 12:04:24.734421 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" event={"ID":"5940e10f-af18-446f-b651-032799eef129","Type":"ContainerStarted","Data":"e85028c76e325627d15c067f90011f1fb58ebef0b728a1fba68ea2834d76a46d"} Jan 23 12:04:24 crc kubenswrapper[4865]: E0123 12:04:24.875913 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5940e10f_af18_446f_b651_032799eef129.slice/crio-69ff32e8b7abd8036ff1a4a87d4340192dc5a5809c066e04f5517031361bdee2.scope\": RecentStats: unable to find data in memory cache]" Jan 23 12:04:25 crc kubenswrapper[4865]: I0123 12:04:25.682702 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-bpdjt" podUID="34e16446-9445-4646-bf3b-08764f77f949" containerName="console" containerID="cri-o://c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36" gracePeriod=15 Jan 23 12:04:25 crc kubenswrapper[4865]: I0123 12:04:25.744380 4865 generic.go:334] "Generic (PLEG): container finished" podID="5940e10f-af18-446f-b651-032799eef129" containerID="69ff32e8b7abd8036ff1a4a87d4340192dc5a5809c066e04f5517031361bdee2" exitCode=0 Jan 23 12:04:25 crc kubenswrapper[4865]: I0123 12:04:25.744448 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" event={"ID":"5940e10f-af18-446f-b651-032799eef129","Type":"ContainerDied","Data":"69ff32e8b7abd8036ff1a4a87d4340192dc5a5809c066e04f5517031361bdee2"} Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.031417 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bpdjt_34e16446-9445-4646-bf3b-08764f77f949/console/0.log" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.031503 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.224496 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-service-ca\") pod \"34e16446-9445-4646-bf3b-08764f77f949\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.224637 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k665n\" (UniqueName: \"kubernetes.io/projected/34e16446-9445-4646-bf3b-08764f77f949-kube-api-access-k665n\") pod \"34e16446-9445-4646-bf3b-08764f77f949\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.224716 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-console-config\") pod \"34e16446-9445-4646-bf3b-08764f77f949\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.224784 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-trusted-ca-bundle\") pod \"34e16446-9445-4646-bf3b-08764f77f949\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.224847 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-oauth-serving-cert\") pod \"34e16446-9445-4646-bf3b-08764f77f949\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.224874 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/34e16446-9445-4646-bf3b-08764f77f949-console-serving-cert\") pod \"34e16446-9445-4646-bf3b-08764f77f949\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.224937 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/34e16446-9445-4646-bf3b-08764f77f949-console-oauth-config\") pod \"34e16446-9445-4646-bf3b-08764f77f949\" (UID: \"34e16446-9445-4646-bf3b-08764f77f949\") " Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.225683 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-console-config" (OuterVolumeSpecName: "console-config") pod "34e16446-9445-4646-bf3b-08764f77f949" (UID: "34e16446-9445-4646-bf3b-08764f77f949"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.225784 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "34e16446-9445-4646-bf3b-08764f77f949" (UID: "34e16446-9445-4646-bf3b-08764f77f949"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.225792 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "34e16446-9445-4646-bf3b-08764f77f949" (UID: "34e16446-9445-4646-bf3b-08764f77f949"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.226177 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-service-ca" (OuterVolumeSpecName: "service-ca") pod "34e16446-9445-4646-bf3b-08764f77f949" (UID: "34e16446-9445-4646-bf3b-08764f77f949"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.231043 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e16446-9445-4646-bf3b-08764f77f949-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "34e16446-9445-4646-bf3b-08764f77f949" (UID: "34e16446-9445-4646-bf3b-08764f77f949"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.231638 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e16446-9445-4646-bf3b-08764f77f949-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "34e16446-9445-4646-bf3b-08764f77f949" (UID: "34e16446-9445-4646-bf3b-08764f77f949"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.233218 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e16446-9445-4646-bf3b-08764f77f949-kube-api-access-k665n" (OuterVolumeSpecName: "kube-api-access-k665n") pod "34e16446-9445-4646-bf3b-08764f77f949" (UID: "34e16446-9445-4646-bf3b-08764f77f949"). InnerVolumeSpecName "kube-api-access-k665n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.326749 4865 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.326781 4865 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.326790 4865 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/34e16446-9445-4646-bf3b-08764f77f949-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.326801 4865 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/34e16446-9445-4646-bf3b-08764f77f949-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.326809 4865 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.326817 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k665n\" (UniqueName: \"kubernetes.io/projected/34e16446-9445-4646-bf3b-08764f77f949-kube-api-access-k665n\") on node \"crc\" DevicePath \"\"" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.326827 4865 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/34e16446-9445-4646-bf3b-08764f77f949-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.754437 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bpdjt_34e16446-9445-4646-bf3b-08764f77f949/console/0.log" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.754768 4865 generic.go:334] "Generic (PLEG): container finished" podID="34e16446-9445-4646-bf3b-08764f77f949" containerID="c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36" exitCode=2 Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.754845 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bpdjt" event={"ID":"34e16446-9445-4646-bf3b-08764f77f949","Type":"ContainerDied","Data":"c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36"} Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.754874 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bpdjt" event={"ID":"34e16446-9445-4646-bf3b-08764f77f949","Type":"ContainerDied","Data":"1e4475de300ed1208deefe344a3dbd534eb2cf9438ea074ada0a64a5c3a33026"} Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.754891 4865 scope.go:117] "RemoveContainer" containerID="c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.755034 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bpdjt" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.782517 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bpdjt"] Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.788107 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-bpdjt"] Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.788389 4865 scope.go:117] "RemoveContainer" containerID="c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36" Jan 23 12:04:26 crc kubenswrapper[4865]: E0123 12:04:26.788941 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36\": container with ID starting with c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36 not found: ID does not exist" containerID="c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36" Jan 23 12:04:26 crc kubenswrapper[4865]: I0123 12:04:26.788975 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36"} err="failed to get container status \"c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36\": rpc error: code = NotFound desc = could not find container \"c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36\": container with ID starting with c70ed3339812f66e6795baf252860f50240a3097ce81c0753f0ed369caa5ca36 not found: ID does not exist" Jan 23 12:04:28 crc kubenswrapper[4865]: I0123 12:04:28.127707 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e16446-9445-4646-bf3b-08764f77f949" path="/var/lib/kubelet/pods/34e16446-9445-4646-bf3b-08764f77f949/volumes" Jan 23 12:04:28 crc kubenswrapper[4865]: I0123 12:04:28.776017 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" event={"ID":"5940e10f-af18-446f-b651-032799eef129","Type":"ContainerStarted","Data":"cfa61b53b7000009dd8f68393c4916c43a36a22a63e3b341f5585b5ac6865769"} Jan 23 12:04:29 crc kubenswrapper[4865]: I0123 12:04:29.787418 4865 generic.go:334] "Generic (PLEG): container finished" podID="5940e10f-af18-446f-b651-032799eef129" containerID="cfa61b53b7000009dd8f68393c4916c43a36a22a63e3b341f5585b5ac6865769" exitCode=0 Jan 23 12:04:29 crc kubenswrapper[4865]: I0123 12:04:29.787469 4865 generic.go:334] "Generic (PLEG): container finished" podID="5940e10f-af18-446f-b651-032799eef129" containerID="3236a9b7ccbd83d236390477c42337fa71710955e6a5612b44f36cbcd22e476f" exitCode=0 Jan 23 12:04:29 crc kubenswrapper[4865]: I0123 12:04:29.787502 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" event={"ID":"5940e10f-af18-446f-b651-032799eef129","Type":"ContainerDied","Data":"cfa61b53b7000009dd8f68393c4916c43a36a22a63e3b341f5585b5ac6865769"} Jan 23 12:04:29 crc kubenswrapper[4865]: I0123 12:04:29.787542 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" event={"ID":"5940e10f-af18-446f-b651-032799eef129","Type":"ContainerDied","Data":"3236a9b7ccbd83d236390477c42337fa71710955e6a5612b44f36cbcd22e476f"} Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.081139 4865 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.092536 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5940e10f-af18-446f-b651-032799eef129-util\") pod \"5940e10f-af18-446f-b651-032799eef129\" (UID: \"5940e10f-af18-446f-b651-032799eef129\") " Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.092589 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5940e10f-af18-446f-b651-032799eef129-bundle\") pod \"5940e10f-af18-446f-b651-032799eef129\" (UID: \"5940e10f-af18-446f-b651-032799eef129\") " Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.092680 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8xxf\" (UniqueName: \"kubernetes.io/projected/5940e10f-af18-446f-b651-032799eef129-kube-api-access-g8xxf\") pod \"5940e10f-af18-446f-b651-032799eef129\" (UID: \"5940e10f-af18-446f-b651-032799eef129\") " Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.093653 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5940e10f-af18-446f-b651-032799eef129-bundle" (OuterVolumeSpecName: "bundle") pod "5940e10f-af18-446f-b651-032799eef129" (UID: "5940e10f-af18-446f-b651-032799eef129"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.098780 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5940e10f-af18-446f-b651-032799eef129-kube-api-access-g8xxf" (OuterVolumeSpecName: "kube-api-access-g8xxf") pod "5940e10f-af18-446f-b651-032799eef129" (UID: "5940e10f-af18-446f-b651-032799eef129"). InnerVolumeSpecName "kube-api-access-g8xxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.115339 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5940e10f-af18-446f-b651-032799eef129-util" (OuterVolumeSpecName: "util") pod "5940e10f-af18-446f-b651-032799eef129" (UID: "5940e10f-af18-446f-b651-032799eef129"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.194037 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8xxf\" (UniqueName: \"kubernetes.io/projected/5940e10f-af18-446f-b651-032799eef129-kube-api-access-g8xxf\") on node \"crc\" DevicePath \"\"" Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.194082 4865 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5940e10f-af18-446f-b651-032799eef129-util\") on node \"crc\" DevicePath \"\"" Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.194097 4865 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5940e10f-af18-446f-b651-032799eef129-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.806261 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" event={"ID":"5940e10f-af18-446f-b651-032799eef129","Type":"ContainerDied","Data":"e85028c76e325627d15c067f90011f1fb58ebef0b728a1fba68ea2834d76a46d"} Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.806338 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e85028c76e325627d15c067f90011f1fb58ebef0b728a1fba68ea2834d76a46d" Jan 23 12:04:31 crc kubenswrapper[4865]: I0123 12:04:31.806457 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcrw4b6" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.817652 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b"] Jan 23 12:04:41 crc kubenswrapper[4865]: E0123 12:04:41.820395 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e16446-9445-4646-bf3b-08764f77f949" containerName="console" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.820472 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e16446-9445-4646-bf3b-08764f77f949" containerName="console" Jan 23 12:04:41 crc kubenswrapper[4865]: E0123 12:04:41.820525 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5940e10f-af18-446f-b651-032799eef129" containerName="pull" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.820573 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5940e10f-af18-446f-b651-032799eef129" containerName="pull" Jan 23 12:04:41 crc kubenswrapper[4865]: E0123 12:04:41.820642 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5940e10f-af18-446f-b651-032799eef129" containerName="util" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.820723 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5940e10f-af18-446f-b651-032799eef129" containerName="util" Jan 23 12:04:41 crc kubenswrapper[4865]: E0123 12:04:41.820786 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5940e10f-af18-446f-b651-032799eef129" containerName="extract" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.820836 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5940e10f-af18-446f-b651-032799eef129" containerName="extract" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.820997 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="5940e10f-af18-446f-b651-032799eef129" containerName="extract" Jan 
23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.821081 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e16446-9445-4646-bf3b-08764f77f949" containerName="console" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.821825 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.824745 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.824939 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.825126 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.826569 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zjbk7" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.826925 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.849314 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b"] Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.948362 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1a0503d-3fc4-45b6-87c0-7af4a7246a4b-webhook-cert\") pod \"metallb-operator-controller-manager-7df9698d5d-lk94b\" (UID: \"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b\") " pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.948468 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjwqh\" (UniqueName: \"kubernetes.io/projected/d1a0503d-3fc4-45b6-87c0-7af4a7246a4b-kube-api-access-vjwqh\") pod \"metallb-operator-controller-manager-7df9698d5d-lk94b\" (UID: \"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b\") " pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:04:41 crc kubenswrapper[4865]: I0123 12:04:41.948503 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d1a0503d-3fc4-45b6-87c0-7af4a7246a4b-apiservice-cert\") pod \"metallb-operator-controller-manager-7df9698d5d-lk94b\" (UID: \"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b\") " pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.049203 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1a0503d-3fc4-45b6-87c0-7af4a7246a4b-webhook-cert\") pod \"metallb-operator-controller-manager-7df9698d5d-lk94b\" (UID: \"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b\") " pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.049281 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjwqh\" (UniqueName: 
\"kubernetes.io/projected/d1a0503d-3fc4-45b6-87c0-7af4a7246a4b-kube-api-access-vjwqh\") pod \"metallb-operator-controller-manager-7df9698d5d-lk94b\" (UID: \"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b\") " pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.049306 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d1a0503d-3fc4-45b6-87c0-7af4a7246a4b-apiservice-cert\") pod \"metallb-operator-controller-manager-7df9698d5d-lk94b\" (UID: \"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b\") " pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.057791 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d1a0503d-3fc4-45b6-87c0-7af4a7246a4b-apiservice-cert\") pod \"metallb-operator-controller-manager-7df9698d5d-lk94b\" (UID: \"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b\") " pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.059156 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d1a0503d-3fc4-45b6-87c0-7af4a7246a4b-webhook-cert\") pod \"metallb-operator-controller-manager-7df9698d5d-lk94b\" (UID: \"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b\") " pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.087213 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjwqh\" (UniqueName: \"kubernetes.io/projected/d1a0503d-3fc4-45b6-87c0-7af4a7246a4b-kube-api-access-vjwqh\") pod \"metallb-operator-controller-manager-7df9698d5d-lk94b\" (UID: \"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b\") " pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.141102 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.295095 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg"] Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.295721 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.308335 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.318486 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-hlqct" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.318498 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.343172 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg"] Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.454789 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9177b0d0-3ce7-40fe-8567-85cb8dd5227a-webhook-cert\") pod \"metallb-operator-webhook-server-78f5776895-s7hqg\" (UID: \"9177b0d0-3ce7-40fe-8567-85cb8dd5227a\") " pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.455030 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9177b0d0-3ce7-40fe-8567-85cb8dd5227a-apiservice-cert\") pod \"metallb-operator-webhook-server-78f5776895-s7hqg\" (UID: \"9177b0d0-3ce7-40fe-8567-85cb8dd5227a\") " pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.455108 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8kvf\" (UniqueName: \"kubernetes.io/projected/9177b0d0-3ce7-40fe-8567-85cb8dd5227a-kube-api-access-b8kvf\") pod \"metallb-operator-webhook-server-78f5776895-s7hqg\" (UID: \"9177b0d0-3ce7-40fe-8567-85cb8dd5227a\") " pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.478726 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b"] Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.556279 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9177b0d0-3ce7-40fe-8567-85cb8dd5227a-apiservice-cert\") pod \"metallb-operator-webhook-server-78f5776895-s7hqg\" (UID: \"9177b0d0-3ce7-40fe-8567-85cb8dd5227a\") " pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.556696 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8kvf\" (UniqueName: \"kubernetes.io/projected/9177b0d0-3ce7-40fe-8567-85cb8dd5227a-kube-api-access-b8kvf\") pod \"metallb-operator-webhook-server-78f5776895-s7hqg\" (UID: \"9177b0d0-3ce7-40fe-8567-85cb8dd5227a\") " pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.556741 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9177b0d0-3ce7-40fe-8567-85cb8dd5227a-webhook-cert\") pod 
\"metallb-operator-webhook-server-78f5776895-s7hqg\" (UID: \"9177b0d0-3ce7-40fe-8567-85cb8dd5227a\") " pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.561053 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9177b0d0-3ce7-40fe-8567-85cb8dd5227a-webhook-cert\") pod \"metallb-operator-webhook-server-78f5776895-s7hqg\" (UID: \"9177b0d0-3ce7-40fe-8567-85cb8dd5227a\") " pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.561180 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9177b0d0-3ce7-40fe-8567-85cb8dd5227a-apiservice-cert\") pod \"metallb-operator-webhook-server-78f5776895-s7hqg\" (UID: \"9177b0d0-3ce7-40fe-8567-85cb8dd5227a\") " pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.573954 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8kvf\" (UniqueName: \"kubernetes.io/projected/9177b0d0-3ce7-40fe-8567-85cb8dd5227a-kube-api-access-b8kvf\") pod \"metallb-operator-webhook-server-78f5776895-s7hqg\" (UID: \"9177b0d0-3ce7-40fe-8567-85cb8dd5227a\") " pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.653952 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.885242 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" event={"ID":"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b","Type":"ContainerStarted","Data":"b6205498092e4163c847f27dc61887aab6a597954bab414316c8e0f41940183d"} Jan 23 12:04:42 crc kubenswrapper[4865]: I0123 12:04:42.956197 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg"] Jan 23 12:04:42 crc kubenswrapper[4865]: W0123 12:04:42.964681 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9177b0d0_3ce7_40fe_8567_85cb8dd5227a.slice/crio-6d711fcc3d3a2ac403c81d4b857b92362eb428c1f85b2b3723b845b712a7596f WatchSource:0}: Error finding container 6d711fcc3d3a2ac403c81d4b857b92362eb428c1f85b2b3723b845b712a7596f: Status 404 returned error can't find the container with id 6d711fcc3d3a2ac403c81d4b857b92362eb428c1f85b2b3723b845b712a7596f Jan 23 12:04:43 crc kubenswrapper[4865]: I0123 12:04:43.892241 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" event={"ID":"9177b0d0-3ce7-40fe-8567-85cb8dd5227a","Type":"ContainerStarted","Data":"6d711fcc3d3a2ac403c81d4b857b92362eb428c1f85b2b3723b845b712a7596f"} Jan 23 12:04:48 crc kubenswrapper[4865]: I0123 12:04:48.923045 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" event={"ID":"9177b0d0-3ce7-40fe-8567-85cb8dd5227a","Type":"ContainerStarted","Data":"d3d1a0d7a2dfbb419198472561c8b84f95b853d7374fc21ee4c10bfa5a6a34a1"} Jan 23 12:04:48 crc kubenswrapper[4865]: I0123 12:04:48.923591 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:04:48 crc kubenswrapper[4865]: I0123 12:04:48.925659 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" event={"ID":"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b","Type":"ContainerStarted","Data":"970fbd29e7ad027b715d5162c082d4b785da78e0c7cbe380974c284c1f434308"} Jan 23 12:04:48 crc kubenswrapper[4865]: I0123 12:04:48.925944 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:04:48 crc kubenswrapper[4865]: I0123 12:04:48.944946 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podStartSLOduration=1.154254003 podStartE2EDuration="6.944926999s" podCreationTimestamp="2026-01-23 12:04:42 +0000 UTC" firstStartedPulling="2026-01-23 12:04:42.967785904 +0000 UTC m=+727.136858130" lastFinishedPulling="2026-01-23 12:04:48.7584589 +0000 UTC m=+732.927531126" observedRunningTime="2026-01-23 12:04:48.942404637 +0000 UTC m=+733.111476883" watchObservedRunningTime="2026-01-23 12:04:48.944926999 +0000 UTC m=+733.113999225" Jan 23 12:04:48 crc kubenswrapper[4865]: I0123 12:04:48.980900 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podStartSLOduration=1.734259022 podStartE2EDuration="7.980881239s" podCreationTimestamp="2026-01-23 12:04:41 +0000 UTC" firstStartedPulling="2026-01-23 12:04:42.496090017 +0000 UTC m=+726.665162243" lastFinishedPulling="2026-01-23 12:04:48.742712234 +0000 UTC m=+732.911784460" observedRunningTime="2026-01-23 12:04:48.978745977 +0000 UTC m=+733.147818203" watchObservedRunningTime="2026-01-23 12:04:48.980881239 +0000 UTC m=+733.149953455" Jan 23 12:05:02 crc kubenswrapper[4865]: I0123 12:05:02.659504 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:05:10 crc kubenswrapper[4865]: I0123 12:05:10.376984 4865 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.148941 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.857201 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-gh89m"] Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.859363 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.866895 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-kmcjf" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.867144 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.867246 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.877871 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4"] Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.878801 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.880424 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.904482 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4"] Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.979254 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9faffae5-73bb-4980-8092-b79a6888476d-frr-conf\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.979305 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4116044f-0cc3-41fb-9f26-536213e1dfa3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dkvk4\" (UID: \"4116044f-0cc3-41fb-9f26-536213e1dfa3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.979328 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfjbj\" (UniqueName: \"kubernetes.io/projected/4116044f-0cc3-41fb-9f26-536213e1dfa3-kube-api-access-qfjbj\") pod \"frr-k8s-webhook-server-7df86c4f6c-dkvk4\" (UID: \"4116044f-0cc3-41fb-9f26-536213e1dfa3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.979350 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9faffae5-73bb-4980-8092-b79a6888476d-reloader\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.979371 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9faffae5-73bb-4980-8092-b79a6888476d-metrics\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.979396 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9faffae5-73bb-4980-8092-b79a6888476d-frr-startup\") pod 
\"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.979411 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9faffae5-73bb-4980-8092-b79a6888476d-metrics-certs\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.979438 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9faffae5-73bb-4980-8092-b79a6888476d-frr-sockets\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.979456 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hv8z\" (UniqueName: \"kubernetes.io/projected/9faffae5-73bb-4980-8092-b79a6888476d-kube-api-access-5hv8z\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.987555 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-szb9h"] Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.988441 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-szb9h" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.990431 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.990607 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.990782 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-799xn" Jan 23 12:05:22 crc kubenswrapper[4865]: I0123 12:05:22.990975 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.021935 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-8bjkz"] Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.023477 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.025720 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.032955 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8bjkz"] Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.080938 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3dee20a9-c14d-4a42-afb1-87d126996c56-memberlist\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.081048 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9faffae5-73bb-4980-8092-b79a6888476d-frr-conf\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.081088 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4116044f-0cc3-41fb-9f26-536213e1dfa3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dkvk4\" (UID: \"4116044f-0cc3-41fb-9f26-536213e1dfa3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.081107 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfjbj\" (UniqueName: \"kubernetes.io/projected/4116044f-0cc3-41fb-9f26-536213e1dfa3-kube-api-access-qfjbj\") pod \"frr-k8s-webhook-server-7df86c4f6c-dkvk4\" (UID: \"4116044f-0cc3-41fb-9f26-536213e1dfa3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:05:23 crc kubenswrapper[4865]: E0123 12:05:23.081209 4865 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 23 12:05:23 crc kubenswrapper[4865]: E0123 12:05:23.081257 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4116044f-0cc3-41fb-9f26-536213e1dfa3-cert podName:4116044f-0cc3-41fb-9f26-536213e1dfa3 nodeName:}" failed. No retries permitted until 2026-01-23 12:05:23.581240058 +0000 UTC m=+767.750312284 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4116044f-0cc3-41fb-9f26-536213e1dfa3-cert") pod "frr-k8s-webhook-server-7df86c4f6c-dkvk4" (UID: "4116044f-0cc3-41fb-9f26-536213e1dfa3") : secret "frr-k8s-webhook-server-cert" not found Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.081127 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9faffae5-73bb-4980-8092-b79a6888476d-reloader\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.081462 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9faffae5-73bb-4980-8092-b79a6888476d-metrics\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.081526 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/9faffae5-73bb-4980-8092-b79a6888476d-frr-conf\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.081683 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/9faffae5-73bb-4980-8092-b79a6888476d-reloader\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.081808 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/9faffae5-73bb-4980-8092-b79a6888476d-metrics\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.081869 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3dee20a9-c14d-4a42-afb1-87d126996c56-metallb-excludel2\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.081900 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9faffae5-73bb-4980-8092-b79a6888476d-frr-startup\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.081953 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nzh\" (UniqueName: \"kubernetes.io/projected/3dee20a9-c14d-4a42-afb1-87d126996c56-kube-api-access-m5nzh\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.081976 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9faffae5-73bb-4980-8092-b79a6888476d-metrics-certs\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 
12:05:23.082693 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/9faffae5-73bb-4980-8092-b79a6888476d-frr-startup\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: E0123 12:05:23.082783 4865 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 23 12:05:23 crc kubenswrapper[4865]: E0123 12:05:23.083059 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9faffae5-73bb-4980-8092-b79a6888476d-metrics-certs podName:9faffae5-73bb-4980-8092-b79a6888476d nodeName:}" failed. No retries permitted until 2026-01-23 12:05:23.583050642 +0000 UTC m=+767.752122868 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9faffae5-73bb-4980-8092-b79a6888476d-metrics-certs") pod "frr-k8s-gh89m" (UID: "9faffae5-73bb-4980-8092-b79a6888476d") : secret "frr-k8s-certs-secret" not found Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.083005 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9faffae5-73bb-4980-8092-b79a6888476d-frr-sockets\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.082843 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/9faffae5-73bb-4980-8092-b79a6888476d-frr-sockets\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.083131 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3dee20a9-c14d-4a42-afb1-87d126996c56-metrics-certs\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.083156 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hv8z\" (UniqueName: \"kubernetes.io/projected/9faffae5-73bb-4980-8092-b79a6888476d-kube-api-access-5hv8z\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.112729 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hv8z\" (UniqueName: \"kubernetes.io/projected/9faffae5-73bb-4980-8092-b79a6888476d-kube-api-access-5hv8z\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.125388 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfjbj\" (UniqueName: \"kubernetes.io/projected/4116044f-0cc3-41fb-9f26-536213e1dfa3-kube-api-access-qfjbj\") pod \"frr-k8s-webhook-server-7df86c4f6c-dkvk4\" (UID: \"4116044f-0cc3-41fb-9f26-536213e1dfa3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.184645 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-6jdrm\" (UniqueName: \"kubernetes.io/projected/3685d2b2-151b-479a-92c1-ae400eacd1b9-kube-api-access-6jdrm\") pod \"controller-6968d8fdc4-8bjkz\" (UID: \"3685d2b2-151b-479a-92c1-ae400eacd1b9\") " pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.184753 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5nzh\" (UniqueName: \"kubernetes.io/projected/3dee20a9-c14d-4a42-afb1-87d126996c56-kube-api-access-m5nzh\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.184811 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3dee20a9-c14d-4a42-afb1-87d126996c56-metrics-certs\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.184847 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3dee20a9-c14d-4a42-afb1-87d126996c56-memberlist\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.184863 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3685d2b2-151b-479a-92c1-ae400eacd1b9-cert\") pod \"controller-6968d8fdc4-8bjkz\" (UID: \"3685d2b2-151b-479a-92c1-ae400eacd1b9\") " pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.184885 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3685d2b2-151b-479a-92c1-ae400eacd1b9-metrics-certs\") pod \"controller-6968d8fdc4-8bjkz\" (UID: \"3685d2b2-151b-479a-92c1-ae400eacd1b9\") " pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.184925 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3dee20a9-c14d-4a42-afb1-87d126996c56-metallb-excludel2\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:23 crc kubenswrapper[4865]: E0123 12:05:23.184967 4865 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 12:05:23 crc kubenswrapper[4865]: E0123 12:05:23.185017 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dee20a9-c14d-4a42-afb1-87d126996c56-memberlist podName:3dee20a9-c14d-4a42-afb1-87d126996c56 nodeName:}" failed. No retries permitted until 2026-01-23 12:05:23.68500103 +0000 UTC m=+767.854073256 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3dee20a9-c14d-4a42-afb1-87d126996c56-memberlist") pod "speaker-szb9h" (UID: "3dee20a9-c14d-4a42-afb1-87d126996c56") : secret "metallb-memberlist" not found Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.185653 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3dee20a9-c14d-4a42-afb1-87d126996c56-metallb-excludel2\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.200110 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3dee20a9-c14d-4a42-afb1-87d126996c56-metrics-certs\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.224134 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5nzh\" (UniqueName: \"kubernetes.io/projected/3dee20a9-c14d-4a42-afb1-87d126996c56-kube-api-access-m5nzh\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.286876 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3685d2b2-151b-479a-92c1-ae400eacd1b9-cert\") pod \"controller-6968d8fdc4-8bjkz\" (UID: \"3685d2b2-151b-479a-92c1-ae400eacd1b9\") " pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.286965 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3685d2b2-151b-479a-92c1-ae400eacd1b9-metrics-certs\") pod \"controller-6968d8fdc4-8bjkz\" (UID: \"3685d2b2-151b-479a-92c1-ae400eacd1b9\") " pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.287438 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jdrm\" (UniqueName: \"kubernetes.io/projected/3685d2b2-151b-479a-92c1-ae400eacd1b9-kube-api-access-6jdrm\") pod \"controller-6968d8fdc4-8bjkz\" (UID: \"3685d2b2-151b-479a-92c1-ae400eacd1b9\") " pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.289805 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3685d2b2-151b-479a-92c1-ae400eacd1b9-cert\") pod \"controller-6968d8fdc4-8bjkz\" (UID: \"3685d2b2-151b-479a-92c1-ae400eacd1b9\") " pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.290084 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3685d2b2-151b-479a-92c1-ae400eacd1b9-metrics-certs\") pod \"controller-6968d8fdc4-8bjkz\" (UID: \"3685d2b2-151b-479a-92c1-ae400eacd1b9\") " pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.335001 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jdrm\" (UniqueName: \"kubernetes.io/projected/3685d2b2-151b-479a-92c1-ae400eacd1b9-kube-api-access-6jdrm\") pod \"controller-6968d8fdc4-8bjkz\" (UID: 
\"3685d2b2-151b-479a-92c1-ae400eacd1b9\") " pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.338060 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.591650 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4116044f-0cc3-41fb-9f26-536213e1dfa3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dkvk4\" (UID: \"4116044f-0cc3-41fb-9f26-536213e1dfa3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.591735 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9faffae5-73bb-4980-8092-b79a6888476d-metrics-certs\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.596996 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9faffae5-73bb-4980-8092-b79a6888476d-metrics-certs\") pod \"frr-k8s-gh89m\" (UID: \"9faffae5-73bb-4980-8092-b79a6888476d\") " pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.599498 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4116044f-0cc3-41fb-9f26-536213e1dfa3-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dkvk4\" (UID: \"4116044f-0cc3-41fb-9f26-536213e1dfa3\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.604417 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8bjkz"] Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.693087 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3dee20a9-c14d-4a42-afb1-87d126996c56-memberlist\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:23 crc kubenswrapper[4865]: E0123 12:05:23.693405 4865 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 12:05:23 crc kubenswrapper[4865]: E0123 12:05:23.693502 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3dee20a9-c14d-4a42-afb1-87d126996c56-memberlist podName:3dee20a9-c14d-4a42-afb1-87d126996c56 nodeName:}" failed. No retries permitted until 2026-01-23 12:05:24.693480228 +0000 UTC m=+768.862552454 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3dee20a9-c14d-4a42-afb1-87d126996c56-memberlist") pod "speaker-szb9h" (UID: "3dee20a9-c14d-4a42-afb1-87d126996c56") : secret "metallb-memberlist" not found Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.775651 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:23 crc kubenswrapper[4865]: I0123 12:05:23.791828 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:05:24 crc kubenswrapper[4865]: I0123 12:05:24.023546 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4"] Jan 23 12:05:24 crc kubenswrapper[4865]: I0123 12:05:24.126363 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" event={"ID":"4116044f-0cc3-41fb-9f26-536213e1dfa3","Type":"ContainerStarted","Data":"e52ff30053d198dfc1b513a24f5ee29bef73e7261fdb289b82517df9e6149105"} Jan 23 12:05:24 crc kubenswrapper[4865]: I0123 12:05:24.126408 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerStarted","Data":"2973fcd141313fe86a3c59104d6115a1e026f080f5893393e35d2b2e972fac00"} Jan 23 12:05:24 crc kubenswrapper[4865]: I0123 12:05:24.126423 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8bjkz" event={"ID":"3685d2b2-151b-479a-92c1-ae400eacd1b9","Type":"ContainerStarted","Data":"b4621231a3db037226a2e80d42bddb5f8bd9241b34c6fc52924695c17633820f"} Jan 23 12:05:24 crc kubenswrapper[4865]: I0123 12:05:24.126447 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:24 crc kubenswrapper[4865]: I0123 12:05:24.126460 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8bjkz" event={"ID":"3685d2b2-151b-479a-92c1-ae400eacd1b9","Type":"ContainerStarted","Data":"16c3ee308f4ea038e0db292673884d80dda1fbd5964d94cb29ddd5c2ddaa1043"} Jan 23 12:05:24 crc kubenswrapper[4865]: I0123 12:05:24.126471 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8bjkz" event={"ID":"3685d2b2-151b-479a-92c1-ae400eacd1b9","Type":"ContainerStarted","Data":"641219b82fc056c107cea0f8ea83eab34465584abf4c9115a6fd8f1448e59f2f"} Jan 23 12:05:24 crc kubenswrapper[4865]: I0123 12:05:24.713254 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3dee20a9-c14d-4a42-afb1-87d126996c56-memberlist\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:24 crc kubenswrapper[4865]: I0123 12:05:24.722498 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3dee20a9-c14d-4a42-afb1-87d126996c56-memberlist\") pod \"speaker-szb9h\" (UID: \"3dee20a9-c14d-4a42-afb1-87d126996c56\") " pod="metallb-system/speaker-szb9h" Jan 23 12:05:24 crc kubenswrapper[4865]: I0123 12:05:24.802213 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-szb9h" Jan 23 12:05:24 crc kubenswrapper[4865]: W0123 12:05:24.826417 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dee20a9_c14d_4a42_afb1_87d126996c56.slice/crio-ec3a869fe840abf0b396bec441ef74528af4d8ec1edfac6fa41cff8974a6c11d WatchSource:0}: Error finding container ec3a869fe840abf0b396bec441ef74528af4d8ec1edfac6fa41cff8974a6c11d: Status 404 returned error can't find the container with id ec3a869fe840abf0b396bec441ef74528af4d8ec1edfac6fa41cff8974a6c11d Jan 23 12:05:25 crc kubenswrapper[4865]: I0123 12:05:25.139476 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-szb9h" event={"ID":"3dee20a9-c14d-4a42-afb1-87d126996c56","Type":"ContainerStarted","Data":"aabab20e981a150a139733adbff53f4aa6231b20440051727cbd214182f0f5a1"} Jan 23 12:05:25 crc kubenswrapper[4865]: I0123 12:05:25.139728 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-szb9h" event={"ID":"3dee20a9-c14d-4a42-afb1-87d126996c56","Type":"ContainerStarted","Data":"ec3a869fe840abf0b396bec441ef74528af4d8ec1edfac6fa41cff8974a6c11d"} Jan 23 12:05:26 crc kubenswrapper[4865]: I0123 12:05:26.150419 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-8bjkz" podStartSLOduration=4.150404306 podStartE2EDuration="4.150404306s" podCreationTimestamp="2026-01-23 12:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:05:24.151905471 +0000 UTC m=+768.320977697" watchObservedRunningTime="2026-01-23 12:05:26.150404306 +0000 UTC m=+770.319476522" Jan 23 12:05:26 crc kubenswrapper[4865]: I0123 12:05:26.166093 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-szb9h" event={"ID":"3dee20a9-c14d-4a42-afb1-87d126996c56","Type":"ContainerStarted","Data":"067cfa42e883a0c5d92abf0e2986ebcb9e0bf6bc3812f9de78cb704b5c448bcf"} Jan 23 12:05:26 crc kubenswrapper[4865]: I0123 12:05:26.166703 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-szb9h" Jan 23 12:05:26 crc kubenswrapper[4865]: I0123 12:05:26.193044 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-szb9h" podStartSLOduration=4.19302458 podStartE2EDuration="4.19302458s" podCreationTimestamp="2026-01-23 12:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:05:26.191702368 +0000 UTC m=+770.360774604" watchObservedRunningTime="2026-01-23 12:05:26.19302458 +0000 UTC m=+770.362096806" Jan 23 12:05:32 crc kubenswrapper[4865]: I0123 12:05:32.205361 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" event={"ID":"4116044f-0cc3-41fb-9f26-536213e1dfa3","Type":"ContainerStarted","Data":"54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389"} Jan 23 12:05:32 crc kubenswrapper[4865]: I0123 12:05:32.206576 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:05:32 crc kubenswrapper[4865]: I0123 12:05:32.208023 4865 generic.go:334] "Generic (PLEG): container finished" podID="9faffae5-73bb-4980-8092-b79a6888476d" 
containerID="94ed535d7ccba4bb5711f27c1aa26c847558da3e6928be06b8188c212cb7fa9c" exitCode=0 Jan 23 12:05:32 crc kubenswrapper[4865]: I0123 12:05:32.208058 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerDied","Data":"94ed535d7ccba4bb5711f27c1aa26c847558da3e6928be06b8188c212cb7fa9c"} Jan 23 12:05:32 crc kubenswrapper[4865]: I0123 12:05:32.227148 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podStartSLOduration=2.827027701 podStartE2EDuration="10.22712648s" podCreationTimestamp="2026-01-23 12:05:22 +0000 UTC" firstStartedPulling="2026-01-23 12:05:24.043210737 +0000 UTC m=+768.212282963" lastFinishedPulling="2026-01-23 12:05:31.443309516 +0000 UTC m=+775.612381742" observedRunningTime="2026-01-23 12:05:32.221056142 +0000 UTC m=+776.390128368" watchObservedRunningTime="2026-01-23 12:05:32.22712648 +0000 UTC m=+776.396198716" Jan 23 12:05:33 crc kubenswrapper[4865]: I0123 12:05:33.217914 4865 generic.go:334] "Generic (PLEG): container finished" podID="9faffae5-73bb-4980-8092-b79a6888476d" containerID="2a73b0e97e8b62d20b8bc8b9658c1f7018bbf4e152f91b90b38436e7b71b577f" exitCode=0 Jan 23 12:05:33 crc kubenswrapper[4865]: I0123 12:05:33.218014 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerDied","Data":"2a73b0e97e8b62d20b8bc8b9658c1f7018bbf4e152f91b90b38436e7b71b577f"} Jan 23 12:05:33 crc kubenswrapper[4865]: I0123 12:05:33.344185 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:05:34 crc kubenswrapper[4865]: I0123 12:05:34.229135 4865 generic.go:334] "Generic (PLEG): container finished" podID="9faffae5-73bb-4980-8092-b79a6888476d" containerID="5820a00b8f56a0a476e8b7e0ef25802ddf18a36b84b30d603488fcafb5b2e004" exitCode=0 Jan 23 12:05:34 crc kubenswrapper[4865]: I0123 12:05:34.229229 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerDied","Data":"5820a00b8f56a0a476e8b7e0ef25802ddf18a36b84b30d603488fcafb5b2e004"} Jan 23 12:05:36 crc kubenswrapper[4865]: I0123 12:05:36.246181 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerStarted","Data":"25cbdaa311f868aad40b0bd9b2168af477c73fa85613a3aea33a1bc6ab4c9786"} Jan 23 12:05:36 crc kubenswrapper[4865]: I0123 12:05:36.247972 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerStarted","Data":"ec12bf4bdacfcf6c78e1359d4719470ee7eb48b8c1ace723ab75295cc13aa70d"} Jan 23 12:05:36 crc kubenswrapper[4865]: I0123 12:05:36.248050 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerStarted","Data":"d6acb080922eeaa2369550b183fe958e11a57932e047f282f28c4fa5f378419b"} Jan 23 12:05:37 crc kubenswrapper[4865]: I0123 12:05:37.257086 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerStarted","Data":"51db759d718f56911ea7d9f6f3ec624104c098e9ab36147c579c072b4a041879"} Jan 
23 12:05:37 crc kubenswrapper[4865]: I0123 12:05:37.257343 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerStarted","Data":"45eb48f5816a107e94e672963596f6a9014bc7c1c2fe081d9739af29f320445f"} Jan 23 12:05:37 crc kubenswrapper[4865]: I0123 12:05:37.257358 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerStarted","Data":"13773ed860364884221cd8101f778f79fb4190cb8583119e333776c0db82e73b"} Jan 23 12:05:37 crc kubenswrapper[4865]: I0123 12:05:37.257512 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:37 crc kubenswrapper[4865]: I0123 12:05:37.281253 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-gh89m" podStartSLOduration=7.735678098 podStartE2EDuration="15.28121721s" podCreationTimestamp="2026-01-23 12:05:22 +0000 UTC" firstStartedPulling="2026-01-23 12:05:23.904495509 +0000 UTC m=+768.073567745" lastFinishedPulling="2026-01-23 12:05:31.450034631 +0000 UTC m=+775.619106857" observedRunningTime="2026-01-23 12:05:37.276308969 +0000 UTC m=+781.445381195" watchObservedRunningTime="2026-01-23 12:05:37.28121721 +0000 UTC m=+781.450289436" Jan 23 12:05:38 crc kubenswrapper[4865]: I0123 12:05:38.776544 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:38 crc kubenswrapper[4865]: I0123 12:05:38.811082 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:43 crc kubenswrapper[4865]: I0123 12:05:43.796331 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:05:44 crc kubenswrapper[4865]: I0123 12:05:44.809168 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-szb9h" Jan 23 12:05:47 crc kubenswrapper[4865]: I0123 12:05:47.754155 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-z2c5c"] Jan 23 12:05:47 crc kubenswrapper[4865]: I0123 12:05:47.756931 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-z2c5c" Jan 23 12:05:47 crc kubenswrapper[4865]: I0123 12:05:47.761869 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 23 12:05:47 crc kubenswrapper[4865]: I0123 12:05:47.762241 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 23 12:05:47 crc kubenswrapper[4865]: I0123 12:05:47.768370 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z2c5c"] Jan 23 12:05:47 crc kubenswrapper[4865]: I0123 12:05:47.814314 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-4dd2r" Jan 23 12:05:47 crc kubenswrapper[4865]: I0123 12:05:47.926186 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9tj5\" (UniqueName: \"kubernetes.io/projected/907860a6-01d5-4a9f-a8f7-592e9b05c46b-kube-api-access-n9tj5\") pod \"openstack-operator-index-z2c5c\" (UID: \"907860a6-01d5-4a9f-a8f7-592e9b05c46b\") " pod="openstack-operators/openstack-operator-index-z2c5c" Jan 23 12:05:48 crc kubenswrapper[4865]: I0123 12:05:48.027280 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9tj5\" (UniqueName: \"kubernetes.io/projected/907860a6-01d5-4a9f-a8f7-592e9b05c46b-kube-api-access-n9tj5\") pod \"openstack-operator-index-z2c5c\" (UID: \"907860a6-01d5-4a9f-a8f7-592e9b05c46b\") " pod="openstack-operators/openstack-operator-index-z2c5c" Jan 23 12:05:48 crc kubenswrapper[4865]: I0123 12:05:48.055884 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9tj5\" (UniqueName: \"kubernetes.io/projected/907860a6-01d5-4a9f-a8f7-592e9b05c46b-kube-api-access-n9tj5\") pod \"openstack-operator-index-z2c5c\" (UID: \"907860a6-01d5-4a9f-a8f7-592e9b05c46b\") " pod="openstack-operators/openstack-operator-index-z2c5c" Jan 23 12:05:48 crc kubenswrapper[4865]: I0123 12:05:48.140019 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-z2c5c" Jan 23 12:05:48 crc kubenswrapper[4865]: I0123 12:05:48.576088 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z2c5c"] Jan 23 12:05:48 crc kubenswrapper[4865]: W0123 12:05:48.584960 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod907860a6_01d5_4a9f_a8f7_592e9b05c46b.slice/crio-c967bc8a65316eb88dd375cde73f36bdeac26d66514a8827051cc6666b0c7845 WatchSource:0}: Error finding container c967bc8a65316eb88dd375cde73f36bdeac26d66514a8827051cc6666b0c7845: Status 404 returned error can't find the container with id c967bc8a65316eb88dd375cde73f36bdeac26d66514a8827051cc6666b0c7845 Jan 23 12:05:48 crc kubenswrapper[4865]: I0123 12:05:48.776254 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:05:48 crc kubenswrapper[4865]: I0123 12:05:48.776337 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:05:49 crc kubenswrapper[4865]: I0123 12:05:49.341752 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z2c5c" event={"ID":"907860a6-01d5-4a9f-a8f7-592e9b05c46b","Type":"ContainerStarted","Data":"c967bc8a65316eb88dd375cde73f36bdeac26d66514a8827051cc6666b0c7845"} Jan 23 12:05:51 crc kubenswrapper[4865]: I0123 12:05:51.364871 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z2c5c" event={"ID":"907860a6-01d5-4a9f-a8f7-592e9b05c46b","Type":"ContainerStarted","Data":"9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f"} Jan 23 12:05:51 crc kubenswrapper[4865]: I0123 12:05:51.390198 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-z2c5c" podStartSLOduration=2.234321881 podStartE2EDuration="4.390161841s" podCreationTimestamp="2026-01-23 12:05:47 +0000 UTC" firstStartedPulling="2026-01-23 12:05:48.587102317 +0000 UTC m=+792.756174543" lastFinishedPulling="2026-01-23 12:05:50.742942277 +0000 UTC m=+794.912014503" observedRunningTime="2026-01-23 12:05:51.38412269 +0000 UTC m=+795.553194916" watchObservedRunningTime="2026-01-23 12:05:51.390161841 +0000 UTC m=+795.559234077" Jan 23 12:05:51 crc kubenswrapper[4865]: I0123 12:05:51.896423 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-z2c5c"] Jan 23 12:05:52 crc kubenswrapper[4865]: I0123 12:05:52.503429 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hzwqc"] Jan 23 12:05:52 crc kubenswrapper[4865]: I0123 12:05:52.504886 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:05:52 crc kubenswrapper[4865]: I0123 12:05:52.513198 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hzwqc"] Jan 23 12:05:52 crc kubenswrapper[4865]: I0123 12:05:52.707578 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2pck\" (UniqueName: \"kubernetes.io/projected/c011a295-505e-465c-a8d6-a647d7ad8ed2-kube-api-access-k2pck\") pod \"openstack-operator-index-hzwqc\" (UID: \"c011a295-505e-465c-a8d6-a647d7ad8ed2\") " pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:05:52 crc kubenswrapper[4865]: I0123 12:05:52.809292 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2pck\" (UniqueName: \"kubernetes.io/projected/c011a295-505e-465c-a8d6-a647d7ad8ed2-kube-api-access-k2pck\") pod \"openstack-operator-index-hzwqc\" (UID: \"c011a295-505e-465c-a8d6-a647d7ad8ed2\") " pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:05:52 crc kubenswrapper[4865]: I0123 12:05:52.837444 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2pck\" (UniqueName: \"kubernetes.io/projected/c011a295-505e-465c-a8d6-a647d7ad8ed2-kube-api-access-k2pck\") pod \"openstack-operator-index-hzwqc\" (UID: \"c011a295-505e-465c-a8d6-a647d7ad8ed2\") " pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:05:52 crc kubenswrapper[4865]: I0123 12:05:52.861715 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:05:53 crc kubenswrapper[4865]: I0123 12:05:53.296495 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hzwqc"] Jan 23 12:05:53 crc kubenswrapper[4865]: I0123 12:05:53.378267 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-z2c5c" podUID="907860a6-01d5-4a9f-a8f7-592e9b05c46b" containerName="registry-server" containerID="cri-o://9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f" gracePeriod=2 Jan 23 12:05:53 crc kubenswrapper[4865]: I0123 12:05:53.378519 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hzwqc" event={"ID":"c011a295-505e-465c-a8d6-a647d7ad8ed2","Type":"ContainerStarted","Data":"c162846625800c7d9fd5bf76cd161b168d547b752b0efb48dd53ff57d11061dd"} Jan 23 12:05:53 crc kubenswrapper[4865]: I0123 12:05:53.756002 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-z2c5c" Jan 23 12:05:53 crc kubenswrapper[4865]: I0123 12:05:53.778276 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:05:53 crc kubenswrapper[4865]: I0123 12:05:53.829193 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9tj5\" (UniqueName: \"kubernetes.io/projected/907860a6-01d5-4a9f-a8f7-592e9b05c46b-kube-api-access-n9tj5\") pod \"907860a6-01d5-4a9f-a8f7-592e9b05c46b\" (UID: \"907860a6-01d5-4a9f-a8f7-592e9b05c46b\") " Jan 23 12:05:53 crc kubenswrapper[4865]: I0123 12:05:53.834329 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907860a6-01d5-4a9f-a8f7-592e9b05c46b-kube-api-access-n9tj5" (OuterVolumeSpecName: "kube-api-access-n9tj5") pod "907860a6-01d5-4a9f-a8f7-592e9b05c46b" (UID: "907860a6-01d5-4a9f-a8f7-592e9b05c46b"). InnerVolumeSpecName "kube-api-access-n9tj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:05:53 crc kubenswrapper[4865]: I0123 12:05:53.930857 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9tj5\" (UniqueName: \"kubernetes.io/projected/907860a6-01d5-4a9f-a8f7-592e9b05c46b-kube-api-access-n9tj5\") on node \"crc\" DevicePath \"\"" Jan 23 12:05:54 crc kubenswrapper[4865]: I0123 12:05:54.388355 4865 generic.go:334] "Generic (PLEG): container finished" podID="907860a6-01d5-4a9f-a8f7-592e9b05c46b" containerID="9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f" exitCode=0 Jan 23 12:05:54 crc kubenswrapper[4865]: I0123 12:05:54.388453 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z2c5c" event={"ID":"907860a6-01d5-4a9f-a8f7-592e9b05c46b","Type":"ContainerDied","Data":"9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f"} Jan 23 12:05:54 crc kubenswrapper[4865]: I0123 12:05:54.388494 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z2c5c" event={"ID":"907860a6-01d5-4a9f-a8f7-592e9b05c46b","Type":"ContainerDied","Data":"c967bc8a65316eb88dd375cde73f36bdeac26d66514a8827051cc6666b0c7845"} Jan 23 12:05:54 crc kubenswrapper[4865]: I0123 12:05:54.388521 4865 scope.go:117] "RemoveContainer" containerID="9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f" Jan 23 12:05:54 crc kubenswrapper[4865]: I0123 12:05:54.388695 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-z2c5c" Jan 23 12:05:54 crc kubenswrapper[4865]: I0123 12:05:54.391335 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hzwqc" event={"ID":"c011a295-505e-465c-a8d6-a647d7ad8ed2","Type":"ContainerStarted","Data":"13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792"} Jan 23 12:05:54 crc kubenswrapper[4865]: I0123 12:05:54.419364 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hzwqc" podStartSLOduration=2.341542726 podStartE2EDuration="2.41933696s" podCreationTimestamp="2026-01-23 12:05:52 +0000 UTC" firstStartedPulling="2026-01-23 12:05:53.315142486 +0000 UTC m=+797.484214752" lastFinishedPulling="2026-01-23 12:05:53.39293675 +0000 UTC m=+797.562008986" observedRunningTime="2026-01-23 12:05:54.412940262 +0000 UTC m=+798.582012528" watchObservedRunningTime="2026-01-23 12:05:54.41933696 +0000 UTC m=+798.588409196" Jan 23 12:05:54 crc kubenswrapper[4865]: I0123 12:05:54.423407 4865 scope.go:117] "RemoveContainer" containerID="9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f" Jan 23 12:05:54 crc kubenswrapper[4865]: E0123 12:05:54.424062 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f\": container with ID starting with 9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f not found: ID does not exist" containerID="9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f" Jan 23 12:05:54 crc kubenswrapper[4865]: I0123 12:05:54.424110 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f"} err="failed to get container status \"9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f\": rpc error: code = NotFound desc = could not find container \"9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f\": container with ID starting with 9eb3e47f34e58f20157f796d139f9a52d1a8af3f3ea03decf40b8e5d5315320f not found: ID does not exist" Jan 23 12:05:54 crc kubenswrapper[4865]: I0123 12:05:54.434626 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-z2c5c"] Jan 23 12:05:54 crc kubenswrapper[4865]: I0123 12:05:54.440309 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-z2c5c"] Jan 23 12:05:56 crc kubenswrapper[4865]: I0123 12:05:56.133194 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="907860a6-01d5-4a9f-a8f7-592e9b05c46b" path="/var/lib/kubelet/pods/907860a6-01d5-4a9f-a8f7-592e9b05c46b/volumes" Jan 23 12:06:02 crc kubenswrapper[4865]: I0123 12:06:02.862489 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:06:02 crc kubenswrapper[4865]: I0123 12:06:02.863543 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:06:02 crc kubenswrapper[4865]: I0123 12:06:02.892334 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:06:03 crc kubenswrapper[4865]: I0123 12:06:03.475950 4865 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.133354 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb"] Jan 23 12:06:05 crc kubenswrapper[4865]: E0123 12:06:05.134184 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907860a6-01d5-4a9f-a8f7-592e9b05c46b" containerName="registry-server" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.134271 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="907860a6-01d5-4a9f-a8f7-592e9b05c46b" containerName="registry-server" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.134444 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="907860a6-01d5-4a9f-a8f7-592e9b05c46b" containerName="registry-server" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.135442 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.139903 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-mg9nb" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.151186 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb"] Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.178295 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pllck\" (UniqueName: \"kubernetes.io/projected/ae28448b-55a3-42ba-8f4c-02d2cdc473be-kube-api-access-pllck\") pod \"8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb\" (UID: \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\") " pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.178366 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae28448b-55a3-42ba-8f4c-02d2cdc473be-bundle\") pod \"8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb\" (UID: \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\") " pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.178402 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae28448b-55a3-42ba-8f4c-02d2cdc473be-util\") pod \"8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb\" (UID: \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\") " pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.279274 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae28448b-55a3-42ba-8f4c-02d2cdc473be-bundle\") pod \"8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb\" (UID: \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\") " pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.279319 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae28448b-55a3-42ba-8f4c-02d2cdc473be-util\") pod \"8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb\" (UID: \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\") " pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.279394 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pllck\" (UniqueName: \"kubernetes.io/projected/ae28448b-55a3-42ba-8f4c-02d2cdc473be-kube-api-access-pllck\") pod \"8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb\" (UID: \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\") " pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.279799 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae28448b-55a3-42ba-8f4c-02d2cdc473be-bundle\") pod \"8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb\" (UID: \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\") " pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.279844 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae28448b-55a3-42ba-8f4c-02d2cdc473be-util\") pod \"8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb\" (UID: \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\") " pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.298250 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pllck\" (UniqueName: \"kubernetes.io/projected/ae28448b-55a3-42ba-8f4c-02d2cdc473be-kube-api-access-pllck\") pod \"8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb\" (UID: \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\") " pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.462743 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:05 crc kubenswrapper[4865]: I0123 12:06:05.874626 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb"] Jan 23 12:06:06 crc kubenswrapper[4865]: I0123 12:06:06.476430 4865 generic.go:334] "Generic (PLEG): container finished" podID="ae28448b-55a3-42ba-8f4c-02d2cdc473be" containerID="1d2e7fd4122358f9518ed44dd0fff6631461d3f3478620a3684a39410f74cc83" exitCode=0 Jan 23 12:06:06 crc kubenswrapper[4865]: I0123 12:06:06.476542 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" event={"ID":"ae28448b-55a3-42ba-8f4c-02d2cdc473be","Type":"ContainerDied","Data":"1d2e7fd4122358f9518ed44dd0fff6631461d3f3478620a3684a39410f74cc83"} Jan 23 12:06:06 crc kubenswrapper[4865]: I0123 12:06:06.476783 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" event={"ID":"ae28448b-55a3-42ba-8f4c-02d2cdc473be","Type":"ContainerStarted","Data":"7fa01077ff6f4c810e8e6a70cf6b385e926f40c96f53f649fb17f7ecaac9cc3a"} Jan 23 12:06:07 crc kubenswrapper[4865]: I0123 12:06:07.497531 4865 generic.go:334] "Generic (PLEG): container finished" podID="ae28448b-55a3-42ba-8f4c-02d2cdc473be" containerID="5f226c6e1853a57a390a0965aea29f2bc22deba273d76fc35a91b0f0248d6c83" exitCode=0 Jan 23 12:06:07 crc kubenswrapper[4865]: I0123 12:06:07.497733 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" event={"ID":"ae28448b-55a3-42ba-8f4c-02d2cdc473be","Type":"ContainerDied","Data":"5f226c6e1853a57a390a0965aea29f2bc22deba273d76fc35a91b0f0248d6c83"} Jan 23 12:06:08 crc kubenswrapper[4865]: I0123 12:06:08.506258 4865 generic.go:334] "Generic (PLEG): container finished" podID="ae28448b-55a3-42ba-8f4c-02d2cdc473be" containerID="3b82dfc9a04c7ac87d7e37157e09b64d324fd2250bfdfa2d01429a24124f87e8" exitCode=0 Jan 23 12:06:08 crc kubenswrapper[4865]: I0123 12:06:08.506316 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" event={"ID":"ae28448b-55a3-42ba-8f4c-02d2cdc473be","Type":"ContainerDied","Data":"3b82dfc9a04c7ac87d7e37157e09b64d324fd2250bfdfa2d01429a24124f87e8"} Jan 23 12:06:09 crc kubenswrapper[4865]: I0123 12:06:09.780342 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:09 crc kubenswrapper[4865]: I0123 12:06:09.939912 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae28448b-55a3-42ba-8f4c-02d2cdc473be-util\") pod \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\" (UID: \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\") " Jan 23 12:06:09 crc kubenswrapper[4865]: I0123 12:06:09.940002 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllck\" (UniqueName: \"kubernetes.io/projected/ae28448b-55a3-42ba-8f4c-02d2cdc473be-kube-api-access-pllck\") pod \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\" (UID: \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\") " Jan 23 12:06:09 crc kubenswrapper[4865]: I0123 12:06:09.940130 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae28448b-55a3-42ba-8f4c-02d2cdc473be-bundle\") pod \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\" (UID: \"ae28448b-55a3-42ba-8f4c-02d2cdc473be\") " Jan 23 12:06:09 crc kubenswrapper[4865]: I0123 12:06:09.940903 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae28448b-55a3-42ba-8f4c-02d2cdc473be-bundle" (OuterVolumeSpecName: "bundle") pod "ae28448b-55a3-42ba-8f4c-02d2cdc473be" (UID: "ae28448b-55a3-42ba-8f4c-02d2cdc473be"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:06:09 crc kubenswrapper[4865]: I0123 12:06:09.945305 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae28448b-55a3-42ba-8f4c-02d2cdc473be-kube-api-access-pllck" (OuterVolumeSpecName: "kube-api-access-pllck") pod "ae28448b-55a3-42ba-8f4c-02d2cdc473be" (UID: "ae28448b-55a3-42ba-8f4c-02d2cdc473be"). InnerVolumeSpecName "kube-api-access-pllck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:06:09 crc kubenswrapper[4865]: I0123 12:06:09.955639 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae28448b-55a3-42ba-8f4c-02d2cdc473be-util" (OuterVolumeSpecName: "util") pod "ae28448b-55a3-42ba-8f4c-02d2cdc473be" (UID: "ae28448b-55a3-42ba-8f4c-02d2cdc473be"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:06:10 crc kubenswrapper[4865]: I0123 12:06:10.041671 4865 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae28448b-55a3-42ba-8f4c-02d2cdc473be-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:06:10 crc kubenswrapper[4865]: I0123 12:06:10.041714 4865 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae28448b-55a3-42ba-8f4c-02d2cdc473be-util\") on node \"crc\" DevicePath \"\"" Jan 23 12:06:10 crc kubenswrapper[4865]: I0123 12:06:10.041728 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pllck\" (UniqueName: \"kubernetes.io/projected/ae28448b-55a3-42ba-8f4c-02d2cdc473be-kube-api-access-pllck\") on node \"crc\" DevicePath \"\"" Jan 23 12:06:10 crc kubenswrapper[4865]: I0123 12:06:10.535329 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" event={"ID":"ae28448b-55a3-42ba-8f4c-02d2cdc473be","Type":"ContainerDied","Data":"7fa01077ff6f4c810e8e6a70cf6b385e926f40c96f53f649fb17f7ecaac9cc3a"} Jan 23 12:06:10 crc kubenswrapper[4865]: I0123 12:06:10.535660 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa01077ff6f4c810e8e6a70cf6b385e926f40c96f53f649fb17f7ecaac9cc3a" Jan 23 12:06:10 crc kubenswrapper[4865]: I0123 12:06:10.535422 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8d926f75bef673e1361e59aba9b612e22294ba1b778ca86f54c8193bb55vjmb" Jan 23 12:06:17 crc kubenswrapper[4865]: I0123 12:06:17.398899 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk"] Jan 23 12:06:17 crc kubenswrapper[4865]: E0123 12:06:17.399484 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae28448b-55a3-42ba-8f4c-02d2cdc473be" containerName="extract" Jan 23 12:06:17 crc kubenswrapper[4865]: I0123 12:06:17.399500 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae28448b-55a3-42ba-8f4c-02d2cdc473be" containerName="extract" Jan 23 12:06:17 crc kubenswrapper[4865]: E0123 12:06:17.399515 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae28448b-55a3-42ba-8f4c-02d2cdc473be" containerName="util" Jan 23 12:06:17 crc kubenswrapper[4865]: I0123 12:06:17.399523 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae28448b-55a3-42ba-8f4c-02d2cdc473be" containerName="util" Jan 23 12:06:17 crc kubenswrapper[4865]: E0123 12:06:17.399546 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae28448b-55a3-42ba-8f4c-02d2cdc473be" containerName="pull" Jan 23 12:06:17 crc kubenswrapper[4865]: I0123 12:06:17.399555 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae28448b-55a3-42ba-8f4c-02d2cdc473be" containerName="pull" Jan 23 12:06:17 crc kubenswrapper[4865]: I0123 12:06:17.399723 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae28448b-55a3-42ba-8f4c-02d2cdc473be" containerName="extract" Jan 23 12:06:17 crc kubenswrapper[4865]: I0123 12:06:17.400182 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" Jan 23 12:06:17 crc kubenswrapper[4865]: I0123 12:06:17.408973 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-nr4zd" Jan 23 12:06:17 crc kubenswrapper[4865]: I0123 12:06:17.455070 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk"] Jan 23 12:06:17 crc kubenswrapper[4865]: I0123 12:06:17.547453 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pflzm\" (UniqueName: \"kubernetes.io/projected/840bd4e6-18da-498a-bd3a-d4e80c69ec70-kube-api-access-pflzm\") pod \"openstack-operator-controller-init-6bcd4d8dcc-2sgsk\" (UID: \"840bd4e6-18da-498a-bd3a-d4e80c69ec70\") " pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" Jan 23 12:06:17 crc kubenswrapper[4865]: I0123 12:06:17.648344 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pflzm\" (UniqueName: \"kubernetes.io/projected/840bd4e6-18da-498a-bd3a-d4e80c69ec70-kube-api-access-pflzm\") pod \"openstack-operator-controller-init-6bcd4d8dcc-2sgsk\" (UID: \"840bd4e6-18da-498a-bd3a-d4e80c69ec70\") " pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" Jan 23 12:06:17 crc kubenswrapper[4865]: I0123 12:06:17.680708 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pflzm\" (UniqueName: \"kubernetes.io/projected/840bd4e6-18da-498a-bd3a-d4e80c69ec70-kube-api-access-pflzm\") pod \"openstack-operator-controller-init-6bcd4d8dcc-2sgsk\" (UID: \"840bd4e6-18da-498a-bd3a-d4e80c69ec70\") " pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" Jan 23 12:06:17 crc kubenswrapper[4865]: I0123 12:06:17.722192 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" Jan 23 12:06:18 crc kubenswrapper[4865]: I0123 12:06:18.167263 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk"] Jan 23 12:06:18 crc kubenswrapper[4865]: I0123 12:06:18.603857 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" event={"ID":"840bd4e6-18da-498a-bd3a-d4e80c69ec70","Type":"ContainerStarted","Data":"24031dd3a5b1952ed75e8c64241ab23a68ebccef52b0325b0ac44a2bce2507f3"} Jan 23 12:06:18 crc kubenswrapper[4865]: I0123 12:06:18.776142 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:06:18 crc kubenswrapper[4865]: I0123 12:06:18.776202 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:06:22 crc kubenswrapper[4865]: I0123 12:06:22.630797 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" event={"ID":"840bd4e6-18da-498a-bd3a-d4e80c69ec70","Type":"ContainerStarted","Data":"248ff4a5d09fe9269983ef79eb681bb0f4f096314c413f09a1f52a736d0e4913"} Jan 23 12:06:22 crc kubenswrapper[4865]: I0123 12:06:22.631710 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" Jan 23 12:06:22 crc kubenswrapper[4865]: I0123 12:06:22.660106 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" podStartSLOduration=1.485329873 podStartE2EDuration="5.660090707s" podCreationTimestamp="2026-01-23 12:06:17 +0000 UTC" firstStartedPulling="2026-01-23 12:06:18.194026275 +0000 UTC m=+822.363098501" lastFinishedPulling="2026-01-23 12:06:22.368787109 +0000 UTC m=+826.537859335" observedRunningTime="2026-01-23 12:06:22.659858982 +0000 UTC m=+826.828931218" watchObservedRunningTime="2026-01-23 12:06:22.660090707 +0000 UTC m=+826.829162933" Jan 23 12:06:27 crc kubenswrapper[4865]: I0123 12:06:27.724935 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.893005 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq"] Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.894277 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.896160 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-kmspr" Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.897252 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b"] Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.897874 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.902683 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zwqgn" Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.903259 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq"] Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.936854 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b"] Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.937839 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z"] Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.938566 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.943481 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-wpr76" Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.965193 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67tz4\" (UniqueName: \"kubernetes.io/projected/10627175-8e39-4799-bec7-c0b49b938a29-kube-api-access-67tz4\") pod \"designate-operator-controller-manager-b45d7bf98-4c94z\" (UID: \"10627175-8e39-4799-bec7-c0b49b938a29\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.965271 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qzjr\" (UniqueName: \"kubernetes.io/projected/5fb13a32-67c3-46b1-a0b8-573e941e6c7e-kube-api-access-6qzjr\") pod \"barbican-operator-controller-manager-59dd8b7cbf-nppmq\" (UID: \"5fb13a32-67c3-46b1-a0b8-573e941e6c7e\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.965296 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dbcx\" (UniqueName: \"kubernetes.io/projected/bdf8f14b-af0d-43cc-b624-7dab2879dc4b-kube-api-access-4dbcx\") pod \"cinder-operator-controller-manager-69cf5d4557-9jp5b\" (UID: \"bdf8f14b-af0d-43cc-b624-7dab2879dc4b\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:06:45 crc kubenswrapper[4865]: I0123 12:06:45.971678 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.002415 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.003171 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.006312 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.006569 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-wx66p" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.006937 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.015834 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-ppdm5" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.022379 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.029950 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.046852 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.047575 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.052187 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-g7gf5" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.054669 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.055612 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.068010 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.068229 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qbdsl" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.068877 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr9qf\" (UniqueName: \"kubernetes.io/projected/2c3366d9-565f-4601-acbb-b473dcfe126c-kube-api-access-kr9qf\") pod \"infra-operator-controller-manager-54ccf4f85d-l6w6d\" (UID: \"2c3366d9-565f-4601-acbb-b473dcfe126c\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.068916 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln558\" (UniqueName: \"kubernetes.io/projected/0167f850-ba43-426a-8c56-aa171131e7da-kube-api-access-ln558\") pod \"heat-operator-controller-manager-594c8c9d5d-fsch6\" (UID: \"0167f850-ba43-426a-8c56-aa171131e7da\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.068942 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qk9h\" (UniqueName: \"kubernetes.io/projected/6aca96af-acfa-4c68-a2f4-ed19f08ddc4e-kube-api-access-5qk9h\") pod \"horizon-operator-controller-manager-77d5c5b54f-qftlt\" (UID: \"6aca96af-acfa-4c68-a2f4-ed19f08ddc4e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.068963 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67tz4\" (UniqueName: \"kubernetes.io/projected/10627175-8e39-4799-bec7-c0b49b938a29-kube-api-access-67tz4\") pod \"designate-operator-controller-manager-b45d7bf98-4c94z\" (UID: \"10627175-8e39-4799-bec7-c0b49b938a29\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.068994 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-l6w6d\" (UID: \"2c3366d9-565f-4601-acbb-b473dcfe126c\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.069016 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdgb9\" (UniqueName: \"kubernetes.io/projected/da1cf187-8918-46b4-ab33-e8912c9d0dd6-kube-api-access-mdgb9\") pod \"glance-operator-controller-manager-78fdd796fd-8qtnc\" (UID: \"da1cf187-8918-46b4-ab33-e8912c9d0dd6\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.069049 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qzjr\" (UniqueName: 
\"kubernetes.io/projected/5fb13a32-67c3-46b1-a0b8-573e941e6c7e-kube-api-access-6qzjr\") pod \"barbican-operator-controller-manager-59dd8b7cbf-nppmq\" (UID: \"5fb13a32-67c3-46b1-a0b8-573e941e6c7e\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.069065 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dbcx\" (UniqueName: \"kubernetes.io/projected/bdf8f14b-af0d-43cc-b624-7dab2879dc4b-kube-api-access-4dbcx\") pod \"cinder-operator-controller-manager-69cf5d4557-9jp5b\" (UID: \"bdf8f14b-af0d-43cc-b624-7dab2879dc4b\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.146735 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qzjr\" (UniqueName: \"kubernetes.io/projected/5fb13a32-67c3-46b1-a0b8-573e941e6c7e-kube-api-access-6qzjr\") pod \"barbican-operator-controller-manager-59dd8b7cbf-nppmq\" (UID: \"5fb13a32-67c3-46b1-a0b8-573e941e6c7e\") " pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.148453 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dbcx\" (UniqueName: \"kubernetes.io/projected/bdf8f14b-af0d-43cc-b624-7dab2879dc4b-kube-api-access-4dbcx\") pod \"cinder-operator-controller-manager-69cf5d4557-9jp5b\" (UID: \"bdf8f14b-af0d-43cc-b624-7dab2879dc4b\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.153143 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67tz4\" (UniqueName: \"kubernetes.io/projected/10627175-8e39-4799-bec7-c0b49b938a29-kube-api-access-67tz4\") pod \"designate-operator-controller-manager-b45d7bf98-4c94z\" (UID: \"10627175-8e39-4799-bec7-c0b49b938a29\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.182731 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.188554 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr9qf\" (UniqueName: \"kubernetes.io/projected/2c3366d9-565f-4601-acbb-b473dcfe126c-kube-api-access-kr9qf\") pod \"infra-operator-controller-manager-54ccf4f85d-l6w6d\" (UID: \"2c3366d9-565f-4601-acbb-b473dcfe126c\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.188880 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln558\" (UniqueName: \"kubernetes.io/projected/0167f850-ba43-426a-8c56-aa171131e7da-kube-api-access-ln558\") pod \"heat-operator-controller-manager-594c8c9d5d-fsch6\" (UID: \"0167f850-ba43-426a-8c56-aa171131e7da\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.188910 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qk9h\" (UniqueName: \"kubernetes.io/projected/6aca96af-acfa-4c68-a2f4-ed19f08ddc4e-kube-api-access-5qk9h\") pod \"horizon-operator-controller-manager-77d5c5b54f-qftlt\" (UID: 
\"6aca96af-acfa-4c68-a2f4-ed19f08ddc4e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.189052 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-l6w6d\" (UID: \"2c3366d9-565f-4601-acbb-b473dcfe126c\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.189080 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdgb9\" (UniqueName: \"kubernetes.io/projected/da1cf187-8918-46b4-ab33-e8912c9d0dd6-kube-api-access-mdgb9\") pod \"glance-operator-controller-manager-78fdd796fd-8qtnc\" (UID: \"da1cf187-8918-46b4-ab33-e8912c9d0dd6\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:06:46 crc kubenswrapper[4865]: E0123 12:06:46.189231 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 12:06:46 crc kubenswrapper[4865]: E0123 12:06:46.189397 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert podName:2c3366d9-565f-4601-acbb-b473dcfe126c nodeName:}" failed. No retries permitted until 2026-01-23 12:06:46.689255344 +0000 UTC m=+850.858327570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert") pod "infra-operator-controller-manager-54ccf4f85d-l6w6d" (UID: "2c3366d9-565f-4601-acbb-b473dcfe126c") : secret "infra-operator-webhook-server-cert" not found Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.215919 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.216914 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.217674 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.223900 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.224327 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.233751 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.234884 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-lkz55" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.258467 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln558\" (UniqueName: \"kubernetes.io/projected/0167f850-ba43-426a-8c56-aa171131e7da-kube-api-access-ln558\") pod \"heat-operator-controller-manager-594c8c9d5d-fsch6\" (UID: \"0167f850-ba43-426a-8c56-aa171131e7da\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.258935 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.276612 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr9qf\" (UniqueName: \"kubernetes.io/projected/2c3366d9-565f-4601-acbb-b473dcfe126c-kube-api-access-kr9qf\") pod \"infra-operator-controller-manager-54ccf4f85d-l6w6d\" (UID: \"2c3366d9-565f-4601-acbb-b473dcfe126c\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.283682 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.285118 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.289136 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qk9h\" (UniqueName: \"kubernetes.io/projected/6aca96af-acfa-4c68-a2f4-ed19f08ddc4e-kube-api-access-5qk9h\") pod \"horizon-operator-controller-manager-77d5c5b54f-qftlt\" (UID: \"6aca96af-acfa-4c68-a2f4-ed19f08ddc4e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.290271 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbdtc\" (UniqueName: \"kubernetes.io/projected/967c3782-1bce-4145-8244-7650fe19dc22-kube-api-access-nbdtc\") pod \"ironic-operator-controller-manager-69d6c9f5b8-h6dkp\" (UID: \"967c3782-1bce-4145-8244-7650fe19dc22\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.295216 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.297320 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdgb9\" (UniqueName: \"kubernetes.io/projected/da1cf187-8918-46b4-ab33-e8912c9d0dd6-kube-api-access-mdgb9\") pod \"glance-operator-controller-manager-78fdd796fd-8qtnc\" (UID: \"da1cf187-8918-46b4-ab33-e8912c9d0dd6\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.301645 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.305033 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.312252 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-wqr8h" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.312980 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-dpkkq" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.323032 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.323837 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.330412 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.331272 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.333086 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-p5xpv" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.337300 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.344417 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jqmpp" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.347649 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.365475 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.380688 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.383078 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.390669 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.391289 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbdtc\" (UniqueName: \"kubernetes.io/projected/967c3782-1bce-4145-8244-7650fe19dc22-kube-api-access-nbdtc\") pod \"ironic-operator-controller-manager-69d6c9f5b8-h6dkp\" (UID: \"967c3782-1bce-4145-8244-7650fe19dc22\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.391332 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgj7k\" (UniqueName: \"kubernetes.io/projected/a9bb243e-e7c3-4f68-be35-d86fa049c570-kube-api-access-jgj7k\") pod \"manila-operator-controller-manager-78c6999f6f-bps6b\" (UID: \"a9bb243e-e7c3-4f68-be35-d86fa049c570\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.391373 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22n2x\" (UniqueName: \"kubernetes.io/projected/d2f4bfa4-63e2-418a-b52a-75d2992af596-kube-api-access-22n2x\") pod \"mariadb-operator-controller-manager-c87fff755-mlm5v\" (UID: \"d2f4bfa4-63e2-418a-b52a-75d2992af596\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.391404 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4csrc\" (UniqueName: \"kubernetes.io/projected/e92ddc14-bdb6-4407-b8a3-047079030166-kube-api-access-4csrc\") pod \"keystone-operator-controller-manager-b8b6d4659-9fl7w\" (UID: 
\"e92ddc14-bdb6-4407-b8a3-047079030166\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.391421 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwjqt\" (UniqueName: \"kubernetes.io/projected/429b62c2-b748-40b1-b00f-a1b0488fc5d0-kube-api-access-dwjqt\") pod \"neutron-operator-controller-manager-5d8f59fb49-hnv8g\" (UID: \"429b62c2-b748-40b1-b00f-a1b0488fc5d0\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.405638 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.406644 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.408297 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-wzpjd" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.422183 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.434662 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.435452 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.443679 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.449231 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-66xxd" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.457352 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.459233 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbdtc\" (UniqueName: \"kubernetes.io/projected/967c3782-1bce-4145-8244-7650fe19dc22-kube-api-access-nbdtc\") pod \"ironic-operator-controller-manager-69d6c9f5b8-h6dkp\" (UID: \"967c3782-1bce-4145-8244-7650fe19dc22\") " pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.460123 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.479268 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.479552 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-m96k8" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.496417 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.497390 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.498712 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9zqs\" (UniqueName: \"kubernetes.io/projected/4836de1a-4a0e-4d02-af0e-3408b4814ecf-kube-api-access-t9zqs\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\" (UID: \"4836de1a-4a0e-4d02-af0e-3408b4814ecf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.498746 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgj7k\" (UniqueName: \"kubernetes.io/projected/a9bb243e-e7c3-4f68-be35-d86fa049c570-kube-api-access-jgj7k\") pod \"manila-operator-controller-manager-78c6999f6f-bps6b\" (UID: \"a9bb243e-e7c3-4f68-be35-d86fa049c570\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.498781 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\" (UID: \"4836de1a-4a0e-4d02-af0e-3408b4814ecf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.498799 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22n2x\" (UniqueName: \"kubernetes.io/projected/d2f4bfa4-63e2-418a-b52a-75d2992af596-kube-api-access-22n2x\") pod \"mariadb-operator-controller-manager-c87fff755-mlm5v\" (UID: \"d2f4bfa4-63e2-418a-b52a-75d2992af596\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.498832 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84v2s\" (UniqueName: \"kubernetes.io/projected/6d4fbfc8-900e-4c44-a458-039d37a6dd40-kube-api-access-84v2s\") pod \"octavia-operator-controller-manager-7bd9774b6-bqtq9\" (UID: \"6d4fbfc8-900e-4c44-a458-039d37a6dd40\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.498854 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4csrc\" (UniqueName: 
\"kubernetes.io/projected/e92ddc14-bdb6-4407-b8a3-047079030166-kube-api-access-4csrc\") pod \"keystone-operator-controller-manager-b8b6d4659-9fl7w\" (UID: \"e92ddc14-bdb6-4407-b8a3-047079030166\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.498873 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwjqt\" (UniqueName: \"kubernetes.io/projected/429b62c2-b748-40b1-b00f-a1b0488fc5d0-kube-api-access-dwjqt\") pod \"neutron-operator-controller-manager-5d8f59fb49-hnv8g\" (UID: \"429b62c2-b748-40b1-b00f-a1b0488fc5d0\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.498933 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7wlc\" (UniqueName: \"kubernetes.io/projected/1959a742-ade2-4266-9a93-e96a1b6e3908-kube-api-access-v7wlc\") pod \"nova-operator-controller-manager-6b8bc8d87d-6t8ts\" (UID: \"1959a742-ade2-4266-9a93-e96a1b6e3908\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.505341 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-f86ht" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.514688 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.529184 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwjqt\" (UniqueName: \"kubernetes.io/projected/429b62c2-b748-40b1-b00f-a1b0488fc5d0-kube-api-access-dwjqt\") pod \"neutron-operator-controller-manager-5d8f59fb49-hnv8g\" (UID: \"429b62c2-b748-40b1-b00f-a1b0488fc5d0\") " pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.545775 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.557102 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4csrc\" (UniqueName: \"kubernetes.io/projected/e92ddc14-bdb6-4407-b8a3-047079030166-kube-api-access-4csrc\") pod \"keystone-operator-controller-manager-b8b6d4659-9fl7w\" (UID: \"e92ddc14-bdb6-4407-b8a3-047079030166\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.558256 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.560260 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.572177 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7ncwt" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.573219 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgj7k\" (UniqueName: \"kubernetes.io/projected/a9bb243e-e7c3-4f68-be35-d86fa049c570-kube-api-access-jgj7k\") pod \"manila-operator-controller-manager-78c6999f6f-bps6b\" (UID: \"a9bb243e-e7c3-4f68-be35-d86fa049c570\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.573678 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22n2x\" (UniqueName: \"kubernetes.io/projected/d2f4bfa4-63e2-418a-b52a-75d2992af596-kube-api-access-22n2x\") pod \"mariadb-operator-controller-manager-c87fff755-mlm5v\" (UID: \"d2f4bfa4-63e2-418a-b52a-75d2992af596\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.576855 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.605543 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7wlc\" (UniqueName: \"kubernetes.io/projected/1959a742-ade2-4266-9a93-e96a1b6e3908-kube-api-access-v7wlc\") pod \"nova-operator-controller-manager-6b8bc8d87d-6t8ts\" (UID: \"1959a742-ade2-4266-9a93-e96a1b6e3908\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.605636 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9zqs\" (UniqueName: \"kubernetes.io/projected/4836de1a-4a0e-4d02-af0e-3408b4814ecf-kube-api-access-t9zqs\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\" (UID: \"4836de1a-4a0e-4d02-af0e-3408b4814ecf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.605668 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5kbg\" (UniqueName: \"kubernetes.io/projected/fb9fb53a-b18e-4291-ab1b-83ac2fd78a73-kube-api-access-v5kbg\") pod \"placement-operator-controller-manager-5d646b7d76-7fdbl\" (UID: \"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.605701 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\" (UID: \"4836de1a-4a0e-4d02-af0e-3408b4814ecf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.605765 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svk9r\" (UniqueName: 
\"kubernetes.io/projected/93194445-a021-4960-ab82-085f13cc959d-kube-api-access-svk9r\") pod \"ovn-operator-controller-manager-55db956ddc-cbz92\" (UID: \"93194445-a021-4960-ab82-085f13cc959d\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.605803 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84v2s\" (UniqueName: \"kubernetes.io/projected/6d4fbfc8-900e-4c44-a458-039d37a6dd40-kube-api-access-84v2s\") pod \"octavia-operator-controller-manager-7bd9774b6-bqtq9\" (UID: \"6d4fbfc8-900e-4c44-a458-039d37a6dd40\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:06:46 crc kubenswrapper[4865]: E0123 12:06:46.606221 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 12:06:46 crc kubenswrapper[4865]: E0123 12:06:46.606272 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert podName:4836de1a-4a0e-4d02-af0e-3408b4814ecf nodeName:}" failed. No retries permitted until 2026-01-23 12:06:47.106256527 +0000 UTC m=+851.275328753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" (UID: "4836de1a-4a0e-4d02-af0e-3408b4814ecf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.610171 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.617884 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.624951 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.641092 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-5l6wg" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.665375 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7wlc\" (UniqueName: \"kubernetes.io/projected/1959a742-ade2-4266-9a93-e96a1b6e3908-kube-api-access-v7wlc\") pod \"nova-operator-controller-manager-6b8bc8d87d-6t8ts\" (UID: \"1959a742-ade2-4266-9a93-e96a1b6e3908\") " pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.666962 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84v2s\" (UniqueName: \"kubernetes.io/projected/6d4fbfc8-900e-4c44-a458-039d37a6dd40-kube-api-access-84v2s\") pod \"octavia-operator-controller-manager-7bd9774b6-bqtq9\" (UID: \"6d4fbfc8-900e-4c44-a458-039d37a6dd40\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.672024 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.696814 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9zqs\" (UniqueName: \"kubernetes.io/projected/4836de1a-4a0e-4d02-af0e-3408b4814ecf-kube-api-access-t9zqs\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\" (UID: \"4836de1a-4a0e-4d02-af0e-3408b4814ecf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.699645 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.736813 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.737536 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svk9r\" (UniqueName: \"kubernetes.io/projected/93194445-a021-4960-ab82-085f13cc959d-kube-api-access-svk9r\") pod \"ovn-operator-controller-manager-55db956ddc-cbz92\" (UID: \"93194445-a021-4960-ab82-085f13cc959d\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.737625 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-l6w6d\" (UID: \"2c3366d9-565f-4601-acbb-b473dcfe126c\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.737692 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5kbg\" (UniqueName: \"kubernetes.io/projected/fb9fb53a-b18e-4291-ab1b-83ac2fd78a73-kube-api-access-v5kbg\") pod \"placement-operator-controller-manager-5d646b7d76-7fdbl\" (UID: \"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:06:46 crc kubenswrapper[4865]: E0123 12:06:46.747477 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 12:06:46 crc kubenswrapper[4865]: E0123 12:06:46.747643 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert podName:2c3366d9-565f-4601-acbb-b473dcfe126c nodeName:}" failed. No retries permitted until 2026-01-23 12:06:47.747624857 +0000 UTC m=+851.916697083 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert") pod "infra-operator-controller-manager-54ccf4f85d-l6w6d" (UID: "2c3366d9-565f-4601-acbb-b473dcfe126c") : secret "infra-operator-webhook-server-cert" not found Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.775877 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.806663 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svk9r\" (UniqueName: \"kubernetes.io/projected/93194445-a021-4960-ab82-085f13cc959d-kube-api-access-svk9r\") pod \"ovn-operator-controller-manager-55db956ddc-cbz92\" (UID: \"93194445-a021-4960-ab82-085f13cc959d\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.806742 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.820736 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.844075 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.844109 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.847860 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm269\" (UniqueName: \"kubernetes.io/projected/661fbfd2-7d52-419a-943f-c57854d2306b-kube-api-access-bm269\") pod \"swift-operator-controller-manager-547cbdb99f-zm52l\" (UID: \"661fbfd2-7d52-419a-943f-c57854d2306b\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.850344 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5kbg\" (UniqueName: \"kubernetes.io/projected/fb9fb53a-b18e-4291-ab1b-83ac2fd78a73-kube-api-access-v5kbg\") pod \"placement-operator-controller-manager-5d646b7d76-7fdbl\" (UID: \"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.863671 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hl94t" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.865295 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.868422 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.871901 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.904112 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8czh9" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.908302 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.920670 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.924110 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.933562 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d"] Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.934407 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.935913 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ffmg5" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.939846 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.949294 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cljq6\" (UniqueName: \"kubernetes.io/projected/dbfec6f5-80b4-480f-a958-c3107b2776c0-kube-api-access-cljq6\") pod \"telemetry-operator-controller-manager-85cd9769bb-kkkcn\" (UID: \"dbfec6f5-80b4-480f-a958-c3107b2776c0\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.949357 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm269\" (UniqueName: \"kubernetes.io/projected/661fbfd2-7d52-419a-943f-c57854d2306b-kube-api-access-bm269\") pod \"swift-operator-controller-manager-547cbdb99f-zm52l\" (UID: \"661fbfd2-7d52-419a-943f-c57854d2306b\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:06:46 crc kubenswrapper[4865]: I0123 12:06:46.968077 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d"] Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.017513 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh"] Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.018432 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.028553 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.029381 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.035894 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh"] Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.038457 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9b24m" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.050651 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cljq6\" (UniqueName: \"kubernetes.io/projected/dbfec6f5-80b4-480f-a958-c3107b2776c0-kube-api-access-cljq6\") pod \"telemetry-operator-controller-manager-85cd9769bb-kkkcn\" (UID: \"dbfec6f5-80b4-480f-a958-c3107b2776c0\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.062817 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m2dr\" (UniqueName: \"kubernetes.io/projected/50ab40ef-54b8-4392-89ad-6b73c346c225-kube-api-access-6m2dr\") pod \"test-operator-controller-manager-69797bbcbd-qmwk4\" (UID: \"50ab40ef-54b8-4392-89ad-6b73c346c225\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.063730 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq77f\" (UniqueName: \"kubernetes.io/projected/8ef0fdaa-8086-467d-8106-5c6dec532dba-kube-api-access-pq77f\") pod \"watcher-operator-controller-manager-5ffb9c6597-7mv2d\" (UID: \"8ef0fdaa-8086-467d-8106-5c6dec532dba\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.087503 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm269\" (UniqueName: \"kubernetes.io/projected/661fbfd2-7d52-419a-943f-c57854d2306b-kube-api-access-bm269\") pod \"swift-operator-controller-manager-547cbdb99f-zm52l\" (UID: \"661fbfd2-7d52-419a-943f-c57854d2306b\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.141259 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cljq6\" (UniqueName: \"kubernetes.io/projected/dbfec6f5-80b4-480f-a958-c3107b2776c0-kube-api-access-cljq6\") pod \"telemetry-operator-controller-manager-85cd9769bb-kkkcn\" (UID: \"dbfec6f5-80b4-480f-a958-c3107b2776c0\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.164396 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9"] Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.165217 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.167717 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2s5w\" (UniqueName: \"kubernetes.io/projected/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-kube-api-access-d2s5w\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.167793 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m2dr\" (UniqueName: \"kubernetes.io/projected/50ab40ef-54b8-4392-89ad-6b73c346c225-kube-api-access-6m2dr\") pod \"test-operator-controller-manager-69797bbcbd-qmwk4\" (UID: \"50ab40ef-54b8-4392-89ad-6b73c346c225\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.167830 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v98c\" (UniqueName: \"kubernetes.io/projected/8e227974-40b8-4d16-8d5f-961b705a9740-kube-api-access-6v98c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fdkt9\" (UID: \"8e227974-40b8-4d16-8d5f-961b705a9740\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.167896 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\" (UID: \"4836de1a-4a0e-4d02-af0e-3408b4814ecf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.167930 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq77f\" (UniqueName: \"kubernetes.io/projected/8ef0fdaa-8086-467d-8106-5c6dec532dba-kube-api-access-pq77f\") pod \"watcher-operator-controller-manager-5ffb9c6597-7mv2d\" (UID: \"8ef0fdaa-8086-467d-8106-5c6dec532dba\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.167957 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.167974 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:47 crc kubenswrapper[4865]: E0123 12:06:47.169312 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Jan 23 12:06:47 crc kubenswrapper[4865]: E0123 12:06:47.169357 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert podName:4836de1a-4a0e-4d02-af0e-3408b4814ecf nodeName:}" failed. No retries permitted until 2026-01-23 12:06:48.169343919 +0000 UTC m=+852.338416145 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" (UID: "4836de1a-4a0e-4d02-af0e-3408b4814ecf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.169917 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-dzq9p" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.206181 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq77f\" (UniqueName: \"kubernetes.io/projected/8ef0fdaa-8086-467d-8106-5c6dec532dba-kube-api-access-pq77f\") pod \"watcher-operator-controller-manager-5ffb9c6597-7mv2d\" (UID: \"8ef0fdaa-8086-467d-8106-5c6dec532dba\") " pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.244683 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9"] Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.254197 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.269282 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.269323 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.269390 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2s5w\" (UniqueName: \"kubernetes.io/projected/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-kube-api-access-d2s5w\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.269434 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v98c\" (UniqueName: \"kubernetes.io/projected/8e227974-40b8-4d16-8d5f-961b705a9740-kube-api-access-6v98c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fdkt9\" (UID: 
\"8e227974-40b8-4d16-8d5f-961b705a9740\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" Jan 23 12:06:47 crc kubenswrapper[4865]: E0123 12:06:47.269849 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 12:06:47 crc kubenswrapper[4865]: E0123 12:06:47.269898 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs podName:b2ea2452-dc3b-4b93-a9d4-e562a63111c9 nodeName:}" failed. No retries permitted until 2026-01-23 12:06:47.769883952 +0000 UTC m=+851.938956178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs") pod "openstack-operator-controller-manager-76c5c47f8f-p49qh" (UID: "b2ea2452-dc3b-4b93-a9d4-e562a63111c9") : secret "webhook-server-cert" not found Jan 23 12:06:47 crc kubenswrapper[4865]: E0123 12:06:47.269954 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 12:06:47 crc kubenswrapper[4865]: E0123 12:06:47.269978 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs podName:b2ea2452-dc3b-4b93-a9d4-e562a63111c9 nodeName:}" failed. No retries permitted until 2026-01-23 12:06:47.769971694 +0000 UTC m=+851.939043920 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs") pod "openstack-operator-controller-manager-76c5c47f8f-p49qh" (UID: "b2ea2452-dc3b-4b93-a9d4-e562a63111c9") : secret "metrics-server-cert" not found Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.295288 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.298466 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.299286 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m2dr\" (UniqueName: \"kubernetes.io/projected/50ab40ef-54b8-4392-89ad-6b73c346c225-kube-api-access-6m2dr\") pod \"test-operator-controller-manager-69797bbcbd-qmwk4\" (UID: \"50ab40ef-54b8-4392-89ad-6b73c346c225\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.300844 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2s5w\" (UniqueName: \"kubernetes.io/projected/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-kube-api-access-d2s5w\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.302254 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z"] Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.318271 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v98c\" (UniqueName: \"kubernetes.io/projected/8e227974-40b8-4d16-8d5f-961b705a9740-kube-api-access-6v98c\") pod \"rabbitmq-cluster-operator-manager-668c99d594-fdkt9\" (UID: \"8e227974-40b8-4d16-8d5f-961b705a9740\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.358331 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.579439 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.723109 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq"] Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.727466 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc"] Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.768761 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b"] Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.780357 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.780398 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.780433 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-l6w6d\" (UID: \"2c3366d9-565f-4601-acbb-b473dcfe126c\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:06:47 crc kubenswrapper[4865]: E0123 12:06:47.780509 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 12:06:47 crc kubenswrapper[4865]: E0123 12:06:47.780568 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs podName:b2ea2452-dc3b-4b93-a9d4-e562a63111c9 nodeName:}" failed. No retries permitted until 2026-01-23 12:06:48.780551308 +0000 UTC m=+852.949623534 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs") pod "openstack-operator-controller-manager-76c5c47f8f-p49qh" (UID: "b2ea2452-dc3b-4b93-a9d4-e562a63111c9") : secret "webhook-server-cert" not found Jan 23 12:06:47 crc kubenswrapper[4865]: E0123 12:06:47.780565 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 12:06:47 crc kubenswrapper[4865]: E0123 12:06:47.780645 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs podName:b2ea2452-dc3b-4b93-a9d4-e562a63111c9 nodeName:}" failed. No retries permitted until 2026-01-23 12:06:48.78062759 +0000 UTC m=+852.949699816 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs") pod "openstack-operator-controller-manager-76c5c47f8f-p49qh" (UID: "b2ea2452-dc3b-4b93-a9d4-e562a63111c9") : secret "metrics-server-cert" not found Jan 23 12:06:47 crc kubenswrapper[4865]: E0123 12:06:47.780568 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 12:06:47 crc kubenswrapper[4865]: E0123 12:06:47.780704 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert podName:2c3366d9-565f-4601-acbb-b473dcfe126c nodeName:}" failed. No retries permitted until 2026-01-23 12:06:49.780698091 +0000 UTC m=+853.949770317 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert") pod "infra-operator-controller-manager-54ccf4f85d-l6w6d" (UID: "2c3366d9-565f-4601-acbb-b473dcfe126c") : secret "infra-operator-webhook-server-cert" not found Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.881315 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" event={"ID":"5fb13a32-67c3-46b1-a0b8-573e941e6c7e","Type":"ContainerStarted","Data":"cd72957047e581389029f1ebaa4c625dacffcaaf75db55bbec35a0f46f3bc4c4"} Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.888884 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" event={"ID":"10627175-8e39-4799-bec7-c0b49b938a29","Type":"ContainerStarted","Data":"57d9d376b1603081a893173b586cbc19293c93d3c6152ab0617b28c96f69a4b3"} Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.892822 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" event={"ID":"da1cf187-8918-46b4-ab33-e8912c9d0dd6","Type":"ContainerStarted","Data":"a345a3a32cd27efe615d6f2ffc091ed9a02eb47ab85842c0aff9879401d984ba"} Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.897497 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" event={"ID":"bdf8f14b-af0d-43cc-b624-7dab2879dc4b","Type":"ContainerStarted","Data":"2ae747f456384ebcb8590bf2f7a77f5893d043ee14c30516c1808e962c8720be"} Jan 23 12:06:47 crc kubenswrapper[4865]: I0123 12:06:47.929430 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6"] Jan 23 12:06:47 crc kubenswrapper[4865]: W0123 12:06:47.987201 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0167f850_ba43_426a_8c56_aa171131e7da.slice/crio-788668979c6dba52215793101e0065b4a81545fd6775cf5f778445ff21d2f3be WatchSource:0}: Error finding container 788668979c6dba52215793101e0065b4a81545fd6775cf5f778445ff21d2f3be: Status 404 returned error can't find the container with id 788668979c6dba52215793101e0065b4a81545fd6775cf5f778445ff21d2f3be Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.187854 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\" (UID: \"4836de1a-4a0e-4d02-af0e-3408b4814ecf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.188160 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.188224 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert podName:4836de1a-4a0e-4d02-af0e-3408b4814ecf nodeName:}" failed. No retries permitted until 2026-01-23 12:06:50.188205365 +0000 UTC m=+854.357277601 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" (UID: "4836de1a-4a0e-4d02-af0e-3408b4814ecf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.314135 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w"] Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.328711 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v"] Jan 23 12:06:48 crc kubenswrapper[4865]: W0123 12:06:48.355542 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2f4bfa4_63e2_418a_b52a_75d2992af596.slice/crio-3b8cc71d695745a4f2543deb18bd0846f6400eae614de640d81feff0ffd3d9bd WatchSource:0}: Error finding container 3b8cc71d695745a4f2543deb18bd0846f6400eae614de640d81feff0ffd3d9bd: Status 404 returned error can't find the container with id 3b8cc71d695745a4f2543deb18bd0846f6400eae614de640d81feff0ffd3d9bd Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.370308 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g"] Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.385607 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt"] Jan 23 12:06:48 crc kubenswrapper[4865]: W0123 12:06:48.386820 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod967c3782_1bce_4145_8244_7650fe19dc22.slice/crio-2288ad6273cbe4f4eaa0726b7a7116cd58a76b752e967ebc7200b496d7dc7658 WatchSource:0}: Error finding container 2288ad6273cbe4f4eaa0726b7a7116cd58a76b752e967ebc7200b496d7dc7658: Status 404 returned error can't find the container with id 2288ad6273cbe4f4eaa0726b7a7116cd58a76b752e967ebc7200b496d7dc7658 Jan 23 12:06:48 crc kubenswrapper[4865]: W0123 12:06:48.393579 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod429b62c2_b748_40b1_b00f_a1b0488fc5d0.slice/crio-bfbdc94e18e61798242a81c249f661f5c0d4aec1ffabfa674a858181c818b3c7 WatchSource:0}: Error finding container bfbdc94e18e61798242a81c249f661f5c0d4aec1ffabfa674a858181c818b3c7: Status 404 returned error can't find the container with id bfbdc94e18e61798242a81c249f661f5c0d4aec1ffabfa674a858181c818b3c7 Jan 23 12:06:48 
crc kubenswrapper[4865]: I0123 12:06:48.398769 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp"] Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.713040 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9"] Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.725507 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts"] Jan 23 12:06:48 crc kubenswrapper[4865]: W0123 12:06:48.742047 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1959a742_ade2_4266_9a93_e96a1b6e3908.slice/crio-dd4afc3c4f913b3fe5ebb0ff5c6dabe7b006f18184152e6528bf1946ac82f488 WatchSource:0}: Error finding container dd4afc3c4f913b3fe5ebb0ff5c6dabe7b006f18184152e6528bf1946ac82f488: Status 404 returned error can't find the container with id dd4afc3c4f913b3fe5ebb0ff5c6dabe7b006f18184152e6528bf1946ac82f488 Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.747472 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92"] Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.756391 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn"] Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.771669 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l"] Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.780107 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.780155 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.780227 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.783760 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9"] Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.786065 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d9cd586c30c8b5457d84dc80396ec6c6d5bb6dd4d7eb00e56b29553f41be78a"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.786321 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" 
containerName="machine-config-daemon" containerID="cri-o://6d9cd586c30c8b5457d84dc80396ec6c6d5bb6dd4d7eb00e56b29553f41be78a" gracePeriod=600 Jan 23 12:06:48 crc kubenswrapper[4865]: W0123 12:06:48.788554 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e227974_40b8_4d16_8d5f_961b705a9740.slice/crio-ba9f2146c20e14208e870b524b007ef912648f64c29d4558dbeec4c381b825f1 WatchSource:0}: Error finding container ba9f2146c20e14208e870b524b007ef912648f64c29d4558dbeec4c381b825f1: Status 404 returned error can't find the container with id ba9f2146c20e14208e870b524b007ef912648f64c29d4558dbeec4c381b825f1 Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.792973 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b"] Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.798477 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.798554 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.799189 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.799336 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs podName:b2ea2452-dc3b-4b93-a9d4-e562a63111c9 nodeName:}" failed. No retries permitted until 2026-01-23 12:06:50.79931814 +0000 UTC m=+854.968390366 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs") pod "openstack-operator-controller-manager-76c5c47f8f-p49qh" (UID: "b2ea2452-dc3b-4b93-a9d4-e562a63111c9") : secret "webhook-server-cert" not found Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.798797 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.799529 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs podName:b2ea2452-dc3b-4b93-a9d4-e562a63111c9 nodeName:}" failed. No retries permitted until 2026-01-23 12:06:50.799506275 +0000 UTC m=+854.968578581 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs") pod "openstack-operator-controller-manager-76c5c47f8f-p49qh" (UID: "b2ea2452-dc3b-4b93-a9d4-e562a63111c9") : secret "metrics-server-cert" not found Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.800378 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d"] Jan 23 12:06:48 crc kubenswrapper[4865]: W0123 12:06:48.810087 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb9fb53a_b18e_4291_ab1b_83ac2fd78a73.slice/crio-b65f904a22cd99602209f6de9fd60788e69aa26028fbecce4e4d41f3b8234030 WatchSource:0}: Error finding container b65f904a22cd99602209f6de9fd60788e69aa26028fbecce4e4d41f3b8234030: Status 404 returned error can't find the container with id b65f904a22cd99602209f6de9fd60788e69aa26028fbecce4e4d41f3b8234030 Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.813412 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl"] Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.821341 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5kbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-7fdbl_openstack-operators(fb9fb53a-b18e-4291-ab1b-83ac2fd78a73): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.822478 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.822919 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4"] Jan 23 12:06:48 crc kubenswrapper[4865]: W0123 12:06:48.827230 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbfec6f5_80b4_480f_a958_c3107b2776c0.slice/crio-7594dc86dcfd8f0dac9524cfb9ba4c0f3ab4daf31bbd2ca5932618579698b21d WatchSource:0}: Error finding container 7594dc86dcfd8f0dac9524cfb9ba4c0f3ab4daf31bbd2ca5932618579698b21d: Status 404 returned error can't find the container with id 7594dc86dcfd8f0dac9524cfb9ba4c0f3ab4daf31bbd2ca5932618579698b21d Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.833533 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cljq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-kkkcn_openstack-operators(dbfec6f5-80b4-480f-a958-c3107b2776c0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 12:06:48 crc kubenswrapper[4865]: W0123 12:06:48.834455 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ef0fdaa_8086_467d_8106_5c6dec532dba.slice/crio-8d794babd56f8e406db23ea272732512b4c281e9dfe83237726e198c2ea9ffa4 WatchSource:0}: Error finding container 8d794babd56f8e406db23ea272732512b4c281e9dfe83237726e198c2ea9ffa4: Status 404 returned error can't find the container with id 8d794babd56f8e406db23ea272732512b4c281e9dfe83237726e198c2ea9ffa4 Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.837856 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.842571 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6m2dr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-qmwk4_openstack-operators(50ab40ef-54b8-4392-89ad-6b73c346c225): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.843922 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.849390 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pq77f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5ffb9c6597-7mv2d_openstack-operators(8ef0fdaa-8086-467d-8106-5c6dec532dba): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.851398 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.936893 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" event={"ID":"661fbfd2-7d52-419a-943f-c57854d2306b","Type":"ContainerStarted","Data":"114a7ba6fefb7b811f5e4cf3c2123249f88aa96477cf2451f0a26f9ec526333c"} Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.944067 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" event={"ID":"0167f850-ba43-426a-8c56-aa171131e7da","Type":"ContainerStarted","Data":"788668979c6dba52215793101e0065b4a81545fd6775cf5f778445ff21d2f3be"} Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.953659 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" event={"ID":"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73","Type":"ContainerStarted","Data":"b65f904a22cd99602209f6de9fd60788e69aa26028fbecce4e4d41f3b8234030"} Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.955621 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.956181 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" event={"ID":"429b62c2-b748-40b1-b00f-a1b0488fc5d0","Type":"ContainerStarted","Data":"bfbdc94e18e61798242a81c249f661f5c0d4aec1ffabfa674a858181c818b3c7"} Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.964843 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" event={"ID":"50ab40ef-54b8-4392-89ad-6b73c346c225","Type":"ContainerStarted","Data":"4cc63a411e24d5afdc3b57c393c1216e076d46789bac2f1d6bf43229939b5c79"} Jan 23 12:06:48 crc 
kubenswrapper[4865]: E0123 12:06:48.966944 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.968496 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" event={"ID":"dbfec6f5-80b4-480f-a958-c3107b2776c0","Type":"ContainerStarted","Data":"7594dc86dcfd8f0dac9524cfb9ba4c0f3ab4daf31bbd2ca5932618579698b21d"} Jan 23 12:06:48 crc kubenswrapper[4865]: E0123 12:06:48.970408 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.971826 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" event={"ID":"e92ddc14-bdb6-4407-b8a3-047079030166","Type":"ContainerStarted","Data":"c3285226232419130e92fc22e63d2e9d9693adb80a847834ca51822407c608d0"} Jan 23 12:06:48 crc kubenswrapper[4865]: I0123 12:06:48.986478 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" event={"ID":"967c3782-1bce-4145-8244-7650fe19dc22","Type":"ContainerStarted","Data":"2288ad6273cbe4f4eaa0726b7a7116cd58a76b752e967ebc7200b496d7dc7658"} Jan 23 12:06:49 crc kubenswrapper[4865]: I0123 12:06:49.018573 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" event={"ID":"a9bb243e-e7c3-4f68-be35-d86fa049c570","Type":"ContainerStarted","Data":"7f4d3755cc335813c846ad77402a471b7de2e36484b74bcd49bfb00b4d921c91"} Jan 23 12:06:49 crc kubenswrapper[4865]: I0123 12:06:49.020146 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" event={"ID":"8e227974-40b8-4d16-8d5f-961b705a9740","Type":"ContainerStarted","Data":"ba9f2146c20e14208e870b524b007ef912648f64c29d4558dbeec4c381b825f1"} Jan 23 12:06:49 crc kubenswrapper[4865]: I0123 12:06:49.021506 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" event={"ID":"8ef0fdaa-8086-467d-8106-5c6dec532dba","Type":"ContainerStarted","Data":"8d794babd56f8e406db23ea272732512b4c281e9dfe83237726e198c2ea9ffa4"} Jan 23 12:06:49 crc kubenswrapper[4865]: E0123 12:06:49.023171 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:06:49 crc kubenswrapper[4865]: I0123 12:06:49.024522 
4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" event={"ID":"6d4fbfc8-900e-4c44-a458-039d37a6dd40","Type":"ContainerStarted","Data":"e90a4568f51db9b88dd4489bce3d4dfc6b519ce4b674cdac1b59a5fae52a0201"} Jan 23 12:06:49 crc kubenswrapper[4865]: I0123 12:06:49.025698 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" event={"ID":"93194445-a021-4960-ab82-085f13cc959d","Type":"ContainerStarted","Data":"3f586ff3982800b6298ae88eefddc6924ebd17d87075853e0b7af2a91c008495"} Jan 23 12:06:49 crc kubenswrapper[4865]: I0123 12:06:49.026708 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" event={"ID":"6aca96af-acfa-4c68-a2f4-ed19f08ddc4e","Type":"ContainerStarted","Data":"0069ae9c162f25a9fc764c9569d833d157d2e3743e1ccdcd8539909adaa16814"} Jan 23 12:06:49 crc kubenswrapper[4865]: I0123 12:06:49.027566 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" event={"ID":"1959a742-ade2-4266-9a93-e96a1b6e3908","Type":"ContainerStarted","Data":"dd4afc3c4f913b3fe5ebb0ff5c6dabe7b006f18184152e6528bf1946ac82f488"} Jan 23 12:06:49 crc kubenswrapper[4865]: I0123 12:06:49.028817 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" event={"ID":"d2f4bfa4-63e2-418a-b52a-75d2992af596","Type":"ContainerStarted","Data":"3b8cc71d695745a4f2543deb18bd0846f6400eae614de640d81feff0ffd3d9bd"} Jan 23 12:06:49 crc kubenswrapper[4865]: I0123 12:06:49.815425 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-l6w6d\" (UID: \"2c3366d9-565f-4601-acbb-b473dcfe126c\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:06:49 crc kubenswrapper[4865]: E0123 12:06:49.816105 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 12:06:49 crc kubenswrapper[4865]: E0123 12:06:49.816267 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert podName:2c3366d9-565f-4601-acbb-b473dcfe126c nodeName:}" failed. No retries permitted until 2026-01-23 12:06:53.816227901 +0000 UTC m=+857.985300127 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert") pod "infra-operator-controller-manager-54ccf4f85d-l6w6d" (UID: "2c3366d9-565f-4601-acbb-b473dcfe126c") : secret "infra-operator-webhook-server-cert" not found Jan 23 12:06:50 crc kubenswrapper[4865]: I0123 12:06:50.041183 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="6d9cd586c30c8b5457d84dc80396ec6c6d5bb6dd4d7eb00e56b29553f41be78a" exitCode=0 Jan 23 12:06:50 crc kubenswrapper[4865]: I0123 12:06:50.041405 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"6d9cd586c30c8b5457d84dc80396ec6c6d5bb6dd4d7eb00e56b29553f41be78a"} Jan 23 12:06:50 crc kubenswrapper[4865]: I0123 12:06:50.041791 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"345cdb54622a6a314c05af6fc9f3dea4d21afb272e6e5c0d8f125f9458dfa194"} Jan 23 12:06:50 crc kubenswrapper[4865]: I0123 12:06:50.041814 4865 scope.go:117] "RemoveContainer" containerID="ff98bb889080e3c7f19be17161b36fc32daa7506d4ebfb5788d7c8ff79bcc3ed" Jan 23 12:06:50 crc kubenswrapper[4865]: E0123 12:06:50.044206 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:06:50 crc kubenswrapper[4865]: E0123 12:06:50.044687 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:06:50 crc kubenswrapper[4865]: E0123 12:06:50.044713 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:06:50 crc kubenswrapper[4865]: E0123 12:06:50.045006 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" Jan 23 12:06:50 crc kubenswrapper[4865]: I0123 12:06:50.226720 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\" (UID: \"4836de1a-4a0e-4d02-af0e-3408b4814ecf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:06:50 crc kubenswrapper[4865]: E0123 12:06:50.228045 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 12:06:50 crc kubenswrapper[4865]: E0123 12:06:50.228110 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert podName:4836de1a-4a0e-4d02-af0e-3408b4814ecf nodeName:}" failed. No retries permitted until 2026-01-23 12:06:54.228093445 +0000 UTC m=+858.397165671 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" (UID: "4836de1a-4a0e-4d02-af0e-3408b4814ecf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 12:06:50 crc kubenswrapper[4865]: I0123 12:06:50.838313 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:50 crc kubenswrapper[4865]: I0123 12:06:50.838483 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:50 crc kubenswrapper[4865]: E0123 12:06:50.838442 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 12:06:50 crc kubenswrapper[4865]: E0123 12:06:50.838617 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs podName:b2ea2452-dc3b-4b93-a9d4-e562a63111c9 nodeName:}" failed. No retries permitted until 2026-01-23 12:06:54.838583107 +0000 UTC m=+859.007655333 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs") pod "openstack-operator-controller-manager-76c5c47f8f-p49qh" (UID: "b2ea2452-dc3b-4b93-a9d4-e562a63111c9") : secret "webhook-server-cert" not found Jan 23 12:06:50 crc kubenswrapper[4865]: E0123 12:06:50.839049 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 12:06:50 crc kubenswrapper[4865]: E0123 12:06:50.839080 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs podName:b2ea2452-dc3b-4b93-a9d4-e562a63111c9 nodeName:}" failed. No retries permitted until 2026-01-23 12:06:54.839072698 +0000 UTC m=+859.008144914 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs") pod "openstack-operator-controller-manager-76c5c47f8f-p49qh" (UID: "b2ea2452-dc3b-4b93-a9d4-e562a63111c9") : secret "metrics-server-cert" not found Jan 23 12:06:53 crc kubenswrapper[4865]: I0123 12:06:53.891632 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-l6w6d\" (UID: \"2c3366d9-565f-4601-acbb-b473dcfe126c\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:06:53 crc kubenswrapper[4865]: E0123 12:06:53.891797 4865 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 12:06:53 crc kubenswrapper[4865]: E0123 12:06:53.892525 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert podName:2c3366d9-565f-4601-acbb-b473dcfe126c nodeName:}" failed. No retries permitted until 2026-01-23 12:07:01.89250255 +0000 UTC m=+866.061574776 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert") pod "infra-operator-controller-manager-54ccf4f85d-l6w6d" (UID: "2c3366d9-565f-4601-acbb-b473dcfe126c") : secret "infra-operator-webhook-server-cert" not found Jan 23 12:06:54 crc kubenswrapper[4865]: I0123 12:06:54.297553 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\" (UID: \"4836de1a-4a0e-4d02-af0e-3408b4814ecf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:06:54 crc kubenswrapper[4865]: E0123 12:06:54.297768 4865 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 12:06:54 crc kubenswrapper[4865]: E0123 12:06:54.298026 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert podName:4836de1a-4a0e-4d02-af0e-3408b4814ecf nodeName:}" failed. No retries permitted until 2026-01-23 12:07:02.297999745 +0000 UTC m=+866.467072011 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" (UID: "4836de1a-4a0e-4d02-af0e-3408b4814ecf") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 12:06:54 crc kubenswrapper[4865]: I0123 12:06:54.906646 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:54 crc kubenswrapper[4865]: I0123 12:06:54.906987 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:06:54 crc kubenswrapper[4865]: E0123 12:06:54.906835 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 12:06:54 crc kubenswrapper[4865]: E0123 12:06:54.907054 4865 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 12:06:54 crc kubenswrapper[4865]: E0123 12:06:54.907142 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs podName:b2ea2452-dc3b-4b93-a9d4-e562a63111c9 nodeName:}" failed. No retries permitted until 2026-01-23 12:07:02.907124777 +0000 UTC m=+867.076197003 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs") pod "openstack-operator-controller-manager-76c5c47f8f-p49qh" (UID: "b2ea2452-dc3b-4b93-a9d4-e562a63111c9") : secret "webhook-server-cert" not found Jan 23 12:06:54 crc kubenswrapper[4865]: E0123 12:06:54.907195 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs podName:b2ea2452-dc3b-4b93-a9d4-e562a63111c9 nodeName:}" failed. No retries permitted until 2026-01-23 12:07:02.907176208 +0000 UTC m=+867.076248434 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs") pod "openstack-operator-controller-manager-76c5c47f8f-p49qh" (UID: "b2ea2452-dc3b-4b93-a9d4-e562a63111c9") : secret "metrics-server-cert" not found Jan 23 12:07:01 crc kubenswrapper[4865]: I0123 12:07:01.920653 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-l6w6d\" (UID: \"2c3366d9-565f-4601-acbb-b473dcfe126c\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:07:01 crc kubenswrapper[4865]: I0123 12:07:01.927666 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c3366d9-565f-4601-acbb-b473dcfe126c-cert\") pod \"infra-operator-controller-manager-54ccf4f85d-l6w6d\" (UID: \"2c3366d9-565f-4601-acbb-b473dcfe126c\") " pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:07:02 crc kubenswrapper[4865]: I0123 12:07:02.008736 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:07:02 crc kubenswrapper[4865]: I0123 12:07:02.328847 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\" (UID: \"4836de1a-4a0e-4d02-af0e-3408b4814ecf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:07:02 crc kubenswrapper[4865]: I0123 12:07:02.333743 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4836de1a-4a0e-4d02-af0e-3408b4814ecf-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\" (UID: \"4836de1a-4a0e-4d02-af0e-3408b4814ecf\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:07:02 crc kubenswrapper[4865]: I0123 12:07:02.485870 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:07:02 crc kubenswrapper[4865]: I0123 12:07:02.941371 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:07:02 crc kubenswrapper[4865]: I0123 12:07:02.942331 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:07:02 crc kubenswrapper[4865]: E0123 12:07:02.941556 4865 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 12:07:02 crc kubenswrapper[4865]: E0123 12:07:02.942535 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs podName:b2ea2452-dc3b-4b93-a9d4-e562a63111c9 nodeName:}" failed. No retries permitted until 2026-01-23 12:07:18.94251503 +0000 UTC m=+883.111587256 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs") pod "openstack-operator-controller-manager-76c5c47f8f-p49qh" (UID: "b2ea2452-dc3b-4b93-a9d4-e562a63111c9") : secret "webhook-server-cert" not found Jan 23 12:07:02 crc kubenswrapper[4865]: I0123 12:07:02.954030 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-metrics-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:07:06 crc kubenswrapper[4865]: E0123 12:07:06.331806 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4" Jan 23 12:07:06 crc kubenswrapper[4865]: E0123 12:07:06.333174 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwjqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5d8f59fb49-hnv8g_openstack-operators(429b62c2-b748-40b1-b00f-a1b0488fc5d0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:06 crc kubenswrapper[4865]: E0123 12:07:06.334651 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" Jan 23 12:07:07 crc kubenswrapper[4865]: E0123 12:07:07.175310 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b57d65d2a968705b9067192a7cb33bd4a12489db87e1d05de78c076f2062cab4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" Jan 23 12:07:07 crc kubenswrapper[4865]: E0123 12:07:07.675344 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5" Jan 23 12:07:07 crc kubenswrapper[4865]: E0123 12:07:07.675560 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-84v2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-bqtq9_openstack-operators(6d4fbfc8-900e-4c44-a458-039d37a6dd40): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:07 crc kubenswrapper[4865]: E0123 12:07:07.677112 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" Jan 23 12:07:08 crc kubenswrapper[4865]: E0123 12:07:08.183126 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" Jan 23 12:07:12 crc kubenswrapper[4865]: E0123 12:07:12.478326 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 23 12:07:12 crc kubenswrapper[4865]: E0123 12:07:12.479243 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mdgb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-8qtnc_openstack-operators(da1cf187-8918-46b4-ab33-e8912c9d0dd6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:12 crc kubenswrapper[4865]: E0123 12:07:12.480471 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" Jan 23 12:07:13 crc kubenswrapper[4865]: E0123 12:07:13.213374 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" Jan 23 12:07:14 crc kubenswrapper[4865]: E0123 12:07:14.202820 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 23 12:07:14 crc kubenswrapper[4865]: E0123 12:07:14.203816 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-svk9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-cbz92_openstack-operators(93194445-a021-4960-ab82-085f13cc959d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:14 crc kubenswrapper[4865]: E0123 12:07:14.205121 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:07:14 crc kubenswrapper[4865]: E0123 12:07:14.218713 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" 
podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:07:16 crc kubenswrapper[4865]: E0123 12:07:16.141008 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 23 12:07:16 crc kubenswrapper[4865]: E0123 12:07:16.141435 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ln558,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-fsch6_openstack-operators(0167f850-ba43-426a-8c56-aa171131e7da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:16 crc kubenswrapper[4865]: E0123 12:07:16.142931 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" Jan 23 12:07:16 crc kubenswrapper[4865]: E0123 12:07:16.238366 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" Jan 23 12:07:16 crc kubenswrapper[4865]: E0123 12:07:16.664957 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 23 12:07:16 crc kubenswrapper[4865]: E0123 12:07:16.665113 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-22n2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-mlm5v_openstack-operators(d2f4bfa4-63e2-418a-b52a-75d2992af596): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:16 crc kubenswrapper[4865]: E0123 12:07:16.667220 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 
23 12:07:17 crc kubenswrapper[4865]: E0123 12:07:17.245938 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 23 12:07:18 crc kubenswrapper[4865]: I0123 12:07:18.985862 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:07:19 crc kubenswrapper[4865]: I0123 12:07:18.991699 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2ea2452-dc3b-4b93-a9d4-e562a63111c9-webhook-certs\") pod \"openstack-operator-controller-manager-76c5c47f8f-p49qh\" (UID: \"b2ea2452-dc3b-4b93-a9d4-e562a63111c9\") " pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:07:19 crc kubenswrapper[4865]: E0123 12:07:19.054698 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30" Jan 23 12:07:19 crc kubenswrapper[4865]: E0123 12:07:19.055138 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nbdtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-69d6c9f5b8-h6dkp_openstack-operators(967c3782-1bce-4145-8244-7650fe19dc22): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:19 crc kubenswrapper[4865]: E0123 12:07:19.056308 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" Jan 23 12:07:19 crc kubenswrapper[4865]: I0123 12:07:19.161059 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:07:19 crc kubenswrapper[4865]: E0123 12:07:19.254774 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:d3c55b59cb192799f8d31196c55c9e9bb3cd38aef7ec51ef257dabf1548e8b30\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" Jan 23 12:07:24 crc kubenswrapper[4865]: E0123 12:07:24.107698 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 23 12:07:24 crc kubenswrapper[4865]: E0123 12:07:24.109285 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jgj7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:24 crc kubenswrapper[4865]: E0123 12:07:24.110527 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:07:24 crc kubenswrapper[4865]: E0123 12:07:24.285351 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:07:26 crc kubenswrapper[4865]: E0123 12:07:26.352954 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 23 12:07:26 crc kubenswrapper[4865]: E0123 12:07:26.353164 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bm269,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-zm52l_openstack-operators(661fbfd2-7d52-419a-943f-c57854d2306b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:26 crc kubenswrapper[4865]: E0123 12:07:26.354565 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" Jan 23 12:07:26 crc kubenswrapper[4865]: E0123 12:07:26.968535 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b" Jan 23 12:07:26 crc kubenswrapper[4865]: E0123 12:07:26.968720 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pq77f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5ffb9c6597-7mv2d_openstack-operators(8ef0fdaa-8086-467d-8106-5c6dec532dba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:26 crc kubenswrapper[4865]: E0123 12:07:26.970118 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:07:27 crc kubenswrapper[4865]: E0123 12:07:27.305989 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" Jan 23 12:07:29 crc kubenswrapper[4865]: E0123 12:07:29.899334 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127" Jan 23 12:07:29 crc kubenswrapper[4865]: E0123 12:07:29.901732 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cljq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-kkkcn_openstack-operators(dbfec6f5-80b4-480f-a958-c3107b2776c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:29 crc kubenswrapper[4865]: E0123 12:07:29.902952 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:07:30 crc kubenswrapper[4865]: E0123 12:07:30.504810 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0" Jan 23 12:07:30 crc kubenswrapper[4865]: E0123 12:07:30.504968 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5kbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-7fdbl_openstack-operators(fb9fb53a-b18e-4291-ab1b-83ac2fd78a73): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:30 crc kubenswrapper[4865]: E0123 12:07:30.506036 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:07:31 crc kubenswrapper[4865]: E0123 12:07:31.059952 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 23 12:07:31 crc kubenswrapper[4865]: E0123 12:07:31.060148 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6m2dr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-qmwk4_openstack-operators(50ab40ef-54b8-4392-89ad-6b73c346c225): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:31 crc kubenswrapper[4865]: E0123 12:07:31.061985 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" Jan 23 12:07:34 crc kubenswrapper[4865]: E0123 12:07:34.109202 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831" Jan 23 12:07:34 crc kubenswrapper[4865]: E0123 12:07:34.109710 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v7wlc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:34 crc kubenswrapper[4865]: E0123 12:07:34.111820 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:07:34 crc kubenswrapper[4865]: E0123 12:07:34.347949 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:4e995cfa360a9d595a01b9c0541ab934692f2374203cb5738127dd784f793831\\\"\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:07:34 crc kubenswrapper[4865]: E0123 12:07:34.647615 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 23 12:07:34 crc kubenswrapper[4865]: E0123 12:07:34.647794 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4csrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-9fl7w_openstack-operators(e92ddc14-bdb6-4407-b8a3-047079030166): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:34 crc kubenswrapper[4865]: E0123 12:07:34.649238 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" Jan 23 12:07:35 crc kubenswrapper[4865]: E0123 12:07:35.764351 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" Jan 23 12:07:35 crc kubenswrapper[4865]: E0123 12:07:35.791303 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 23 12:07:35 crc kubenswrapper[4865]: E0123 12:07:35.791548 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6v98c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-fdkt9_openstack-operators(8e227974-40b8-4d16-8d5f-961b705a9740): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:07:35 crc kubenswrapper[4865]: E0123 12:07:35.795726 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" Jan 23 12:07:36 crc kubenswrapper[4865]: I0123 12:07:36.426281 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d"] Jan 23 12:07:36 crc kubenswrapper[4865]: I0123 12:07:36.467362 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7"] Jan 23 12:07:36 crc kubenswrapper[4865]: W0123 12:07:36.556389 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4836de1a_4a0e_4d02_af0e_3408b4814ecf.slice/crio-56c767af6dc6e458a6b9a0d02ff63ffaceda2b622fb07a6004e6b1952ddb26df WatchSource:0}: Error finding container 
56c767af6dc6e458a6b9a0d02ff63ffaceda2b622fb07a6004e6b1952ddb26df: Status 404 returned error can't find the container with id 56c767af6dc6e458a6b9a0d02ff63ffaceda2b622fb07a6004e6b1952ddb26df Jan 23 12:07:36 crc kubenswrapper[4865]: I0123 12:07:36.565747 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh"] Jan 23 12:07:36 crc kubenswrapper[4865]: W0123 12:07:36.590874 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2ea2452_dc3b_4b93_a9d4_e562a63111c9.slice/crio-e9b4fb2c0d7336a6c253cb356d1455d9282f3cb4ebd2b807f8487ecfbd49c2e4 WatchSource:0}: Error finding container e9b4fb2c0d7336a6c253cb356d1455d9282f3cb4ebd2b807f8487ecfbd49c2e4: Status 404 returned error can't find the container with id e9b4fb2c0d7336a6c253cb356d1455d9282f3cb4ebd2b807f8487ecfbd49c2e4 Jan 23 12:07:36 crc kubenswrapper[4865]: I0123 12:07:36.770636 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" event={"ID":"b2ea2452-dc3b-4b93-a9d4-e562a63111c9","Type":"ContainerStarted","Data":"e9b4fb2c0d7336a6c253cb356d1455d9282f3cb4ebd2b807f8487ecfbd49c2e4"} Jan 23 12:07:36 crc kubenswrapper[4865]: I0123 12:07:36.774856 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" event={"ID":"4836de1a-4a0e-4d02-af0e-3408b4814ecf","Type":"ContainerStarted","Data":"56c767af6dc6e458a6b9a0d02ff63ffaceda2b622fb07a6004e6b1952ddb26df"} Jan 23 12:07:36 crc kubenswrapper[4865]: I0123 12:07:36.777720 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" event={"ID":"2c3366d9-565f-4601-acbb-b473dcfe126c","Type":"ContainerStarted","Data":"f96ac7eed5f15d83035952ea4911340dee63ec4a80dae46196d2a7de79a01f8a"} Jan 23 12:07:36 crc kubenswrapper[4865]: E0123 12:07:36.781045 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.786141 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" event={"ID":"429b62c2-b748-40b1-b00f-a1b0488fc5d0","Type":"ContainerStarted","Data":"bd719900c8142a7d20c2f2d0218496dbcd37cde9dab823d7260847f6749c0bcb"} Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.787125 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.789511 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" event={"ID":"bdf8f14b-af0d-43cc-b624-7dab2879dc4b","Type":"ContainerStarted","Data":"cc196a7d0a8483e448852dd3080814eafacc01c4fa3eef717a29e19532163b8f"} Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.790307 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.814848 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podStartSLOduration=4.271081828 podStartE2EDuration="51.814832849s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.402477595 +0000 UTC m=+852.571549821" lastFinishedPulling="2026-01-23 12:07:35.946228616 +0000 UTC m=+900.115300842" observedRunningTime="2026-01-23 12:07:37.810440361 +0000 UTC m=+901.979512587" watchObservedRunningTime="2026-01-23 12:07:37.814832849 +0000 UTC m=+901.983905075" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.816523 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" event={"ID":"5fb13a32-67c3-46b1-a0b8-573e941e6c7e","Type":"ContainerStarted","Data":"05c32a9f69fa45c4c849c2c0593634a1d358994f1d3669db97162d3139e34baf"} Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.817252 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.828034 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" event={"ID":"10627175-8e39-4799-bec7-c0b49b938a29","Type":"ContainerStarted","Data":"1fdbf97e657e0bfd89c2f730d3b5c9a07d8e976682f4a06188f4ac6b2e76428f"} Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.828190 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.829546 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" event={"ID":"93194445-a021-4960-ab82-085f13cc959d","Type":"ContainerStarted","Data":"1b9b6be821f701ac56b53a484a353bea5212b6f02ef587724d911e861b2fc97c"} Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.830178 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.837986 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" event={"ID":"da1cf187-8918-46b4-ab33-e8912c9d0dd6","Type":"ContainerStarted","Data":"63205f38181e8e7e4b899f35881b81bd6c72eba848992c5ee8006e2f0700a70e"} Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.838241 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.839424 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" event={"ID":"967c3782-1bce-4145-8244-7650fe19dc22","Type":"ContainerStarted","Data":"76f1ecd5b0730a0e64ce51eb0d79c203a16172b032a4b3c0ff734fdda3df422e"} Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.839621 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:07:37 crc 
kubenswrapper[4865]: I0123 12:07:37.840736 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" event={"ID":"6aca96af-acfa-4c68-a2f4-ed19f08ddc4e","Type":"ContainerStarted","Data":"476d6bdbc43b8e01fb3e9f46b5fac5875299d36c4c9a12328874015faac89f4f"} Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.841099 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.846319 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podStartSLOduration=14.260635736 podStartE2EDuration="52.846302499s" podCreationTimestamp="2026-01-23 12:06:45 +0000 UTC" firstStartedPulling="2026-01-23 12:06:47.806729495 +0000 UTC m=+851.975801721" lastFinishedPulling="2026-01-23 12:07:26.392396258 +0000 UTC m=+890.561468484" observedRunningTime="2026-01-23 12:07:37.84142361 +0000 UTC m=+902.010495826" watchObservedRunningTime="2026-01-23 12:07:37.846302499 +0000 UTC m=+902.015374725" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.857728 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" event={"ID":"0167f850-ba43-426a-8c56-aa171131e7da","Type":"ContainerStarted","Data":"c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052"} Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.858302 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.862740 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" event={"ID":"b2ea2452-dc3b-4b93-a9d4-e562a63111c9","Type":"ContainerStarted","Data":"c51c964c06647f878163c7193cda0d69f17f715564a8f339956514f2b970af5a"} Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.863244 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.866149 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" event={"ID":"6d4fbfc8-900e-4c44-a458-039d37a6dd40","Type":"ContainerStarted","Data":"34eabd6c502550b118ebbab06e0e826b6e3ea3a716d028a059c8e0fdcc47a0d5"} Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.866612 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.870872 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" event={"ID":"d2f4bfa4-63e2-418a-b52a-75d2992af596","Type":"ContainerStarted","Data":"efa85f7f325947f3c6e17fa6b4b0e0f0e4613a29c14fa6a93c768879ca7375db"} Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.871242 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.884091 4865 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podStartSLOduration=9.2205538 podStartE2EDuration="51.884070354s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.388993922 +0000 UTC m=+852.558066148" lastFinishedPulling="2026-01-23 12:07:31.052510476 +0000 UTC m=+895.221582702" observedRunningTime="2026-01-23 12:07:37.879002301 +0000 UTC m=+902.048074527" watchObservedRunningTime="2026-01-23 12:07:37.884070354 +0000 UTC m=+902.053142580" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.908900 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podStartSLOduration=4.829577848 podStartE2EDuration="52.908886692s" podCreationTimestamp="2026-01-23 12:06:45 +0000 UTC" firstStartedPulling="2026-01-23 12:06:47.788654705 +0000 UTC m=+851.957726931" lastFinishedPulling="2026-01-23 12:07:35.867963549 +0000 UTC m=+900.037035775" observedRunningTime="2026-01-23 12:07:37.904955456 +0000 UTC m=+902.074027692" watchObservedRunningTime="2026-01-23 12:07:37.908886692 +0000 UTC m=+902.077958918" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.978751 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podStartSLOduration=4.176305818 podStartE2EDuration="51.978732913s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.39405321 +0000 UTC m=+852.563125426" lastFinishedPulling="2026-01-23 12:07:36.196480295 +0000 UTC m=+900.365552521" observedRunningTime="2026-01-23 12:07:37.934366216 +0000 UTC m=+902.103438442" watchObservedRunningTime="2026-01-23 12:07:37.978732913 +0000 UTC m=+902.147805129" Jan 23 12:07:37 crc kubenswrapper[4865]: I0123 12:07:37.979265 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podStartSLOduration=4.571283714 podStartE2EDuration="51.979259315s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.793293611 +0000 UTC m=+852.962365827" lastFinishedPulling="2026-01-23 12:07:36.201269202 +0000 UTC m=+900.370341428" observedRunningTime="2026-01-23 12:07:37.975910934 +0000 UTC m=+902.144983160" watchObservedRunningTime="2026-01-23 12:07:37.979259315 +0000 UTC m=+902.148331541" Jan 23 12:07:38 crc kubenswrapper[4865]: I0123 12:07:38.016131 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podStartSLOduration=14.45896519 podStartE2EDuration="53.016114999s" podCreationTimestamp="2026-01-23 12:06:45 +0000 UTC" firstStartedPulling="2026-01-23 12:06:47.779728909 +0000 UTC m=+851.948801135" lastFinishedPulling="2026-01-23 12:07:26.336878728 +0000 UTC m=+890.505950944" observedRunningTime="2026-01-23 12:07:38.0125073 +0000 UTC m=+902.181579536" watchObservedRunningTime="2026-01-23 12:07:38.016114999 +0000 UTC m=+902.185187225" Jan 23 12:07:38 crc kubenswrapper[4865]: I0123 12:07:38.084768 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podStartSLOduration=10.032799919 podStartE2EDuration="53.084745939s" podCreationTimestamp="2026-01-23 12:06:45 +0000 UTC" firstStartedPulling="2026-01-23 12:06:47.447126673 +0000 
UTC m=+851.616198899" lastFinishedPulling="2026-01-23 12:07:30.499072693 +0000 UTC m=+894.668144919" observedRunningTime="2026-01-23 12:07:38.075697797 +0000 UTC m=+902.244770043" watchObservedRunningTime="2026-01-23 12:07:38.084745939 +0000 UTC m=+902.253818165" Jan 23 12:07:38 crc kubenswrapper[4865]: I0123 12:07:38.240329 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podStartSLOduration=4.409626487 podStartE2EDuration="52.240309739s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.3583184 +0000 UTC m=+852.527390626" lastFinishedPulling="2026-01-23 12:07:36.189001652 +0000 UTC m=+900.358073878" observedRunningTime="2026-01-23 12:07:38.195294037 +0000 UTC m=+902.364366263" watchObservedRunningTime="2026-01-23 12:07:38.240309739 +0000 UTC m=+902.409381965" Jan 23 12:07:38 crc kubenswrapper[4865]: I0123 12:07:38.397636 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podStartSLOduration=52.397617591 podStartE2EDuration="52.397617591s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:07:38.326067829 +0000 UTC m=+902.495140065" watchObservedRunningTime="2026-01-23 12:07:38.397617591 +0000 UTC m=+902.566689827" Jan 23 12:07:38 crc kubenswrapper[4865]: I0123 12:07:38.453948 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podStartSLOduration=5.327134748 podStartE2EDuration="52.45393205s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.73720858 +0000 UTC m=+852.906280806" lastFinishedPulling="2026-01-23 12:07:35.864005882 +0000 UTC m=+900.033078108" observedRunningTime="2026-01-23 12:07:38.452591717 +0000 UTC m=+902.621663943" watchObservedRunningTime="2026-01-23 12:07:38.45393205 +0000 UTC m=+902.623004276" Jan 23 12:07:38 crc kubenswrapper[4865]: I0123 12:07:38.457454 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podStartSLOduration=5.258024473 podStartE2EDuration="53.457438417s" podCreationTimestamp="2026-01-23 12:06:45 +0000 UTC" firstStartedPulling="2026-01-23 12:06:47.996489647 +0000 UTC m=+852.165561873" lastFinishedPulling="2026-01-23 12:07:36.195903591 +0000 UTC m=+900.364975817" observedRunningTime="2026-01-23 12:07:38.40613611 +0000 UTC m=+902.575208336" watchObservedRunningTime="2026-01-23 12:07:38.457438417 +0000 UTC m=+902.626510643" Jan 23 12:07:39 crc kubenswrapper[4865]: I0123 12:07:39.120215 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 12:07:40 crc kubenswrapper[4865]: E0123 12:07:40.121698 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:2d6d13b3c28e45c6bec980b8808dda8da4723ae87e66d04f53d52c3b3c51612b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:07:40 crc kubenswrapper[4865]: I0123 12:07:40.897095 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" event={"ID":"a9bb243e-e7c3-4f68-be35-d86fa049c570","Type":"ContainerStarted","Data":"d16a099e596d85c91fd0fa1d94c0861d76e76c4983032b1c5e97e173ecc3c6c4"} Jan 23 12:07:40 crc kubenswrapper[4865]: I0123 12:07:40.897560 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:07:40 crc kubenswrapper[4865]: I0123 12:07:40.899721 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" event={"ID":"661fbfd2-7d52-419a-943f-c57854d2306b","Type":"ContainerStarted","Data":"2504392e494c1ef358cfb124eb480bdbf70a7733b9f7b625220f52033a353160"} Jan 23 12:07:40 crc kubenswrapper[4865]: I0123 12:07:40.899995 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:07:40 crc kubenswrapper[4865]: I0123 12:07:40.919916 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podStartSLOduration=4.084104837 podStartE2EDuration="54.919893342s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.8199935 +0000 UTC m=+852.989065726" lastFinishedPulling="2026-01-23 12:07:39.655782005 +0000 UTC m=+903.824854231" observedRunningTime="2026-01-23 12:07:40.918310304 +0000 UTC m=+905.087382530" watchObservedRunningTime="2026-01-23 12:07:40.919893342 +0000 UTC m=+905.088965568" Jan 23 12:07:40 crc kubenswrapper[4865]: I0123 12:07:40.941101 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podStartSLOduration=4.150106638 podStartE2EDuration="54.941086622s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.802682049 +0000 UTC m=+852.971754275" lastFinishedPulling="2026-01-23 12:07:39.593662043 +0000 UTC m=+903.762734259" observedRunningTime="2026-01-23 12:07:40.939456842 +0000 UTC m=+905.108529078" watchObservedRunningTime="2026-01-23 12:07:40.941086622 +0000 UTC m=+905.110158848" Jan 23 12:07:42 crc kubenswrapper[4865]: E0123 12:07:42.120330 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:07:43 crc kubenswrapper[4865]: E0123 12:07:43.119407 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.796082 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9k7jx"] Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.798115 4865 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.806112 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r99wg\" (UniqueName: \"kubernetes.io/projected/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-kube-api-access-r99wg\") pod \"community-operators-9k7jx\" (UID: \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\") " pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.806302 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-catalog-content\") pod \"community-operators-9k7jx\" (UID: \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\") " pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.806447 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-utilities\") pod \"community-operators-9k7jx\" (UID: \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\") " pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.812021 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9k7jx"] Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.907320 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r99wg\" (UniqueName: \"kubernetes.io/projected/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-kube-api-access-r99wg\") pod \"community-operators-9k7jx\" (UID: \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\") " pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.907415 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-catalog-content\") pod \"community-operators-9k7jx\" (UID: \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\") " pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.907475 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-utilities\") pod \"community-operators-9k7jx\" (UID: \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\") " pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.908241 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-utilities\") pod \"community-operators-9k7jx\" (UID: \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\") " pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.908280 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-catalog-content\") pod \"community-operators-9k7jx\" (UID: \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\") " pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.921462 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" event={"ID":"4836de1a-4a0e-4d02-af0e-3408b4814ecf","Type":"ContainerStarted","Data":"5286c16ec9fce398db9582fe2fc7bb61df7b87e42ac85b0d231655c2783a9fa6"} Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.922524 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.924219 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" event={"ID":"2c3366d9-565f-4601-acbb-b473dcfe126c","Type":"ContainerStarted","Data":"e3fe3b1865710694ccdd89df2ca4de17a4db373f4f67811172ced80874644711"} Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.924286 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.937626 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r99wg\" (UniqueName: \"kubernetes.io/projected/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-kube-api-access-r99wg\") pod \"community-operators-9k7jx\" (UID: \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\") " pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.970717 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" podStartSLOduration=51.241848546 podStartE2EDuration="57.970698598s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:07:36.572143165 +0000 UTC m=+900.741215381" lastFinishedPulling="2026-01-23 12:07:43.300993207 +0000 UTC m=+907.470065433" observedRunningTime="2026-01-23 12:07:43.964941807 +0000 UTC m=+908.134014033" watchObservedRunningTime="2026-01-23 12:07:43.970698598 +0000 UTC m=+908.139770824" Jan 23 12:07:43 crc kubenswrapper[4865]: I0123 12:07:43.996770 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podStartSLOduration=51.238352691 podStartE2EDuration="57.996748927s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:07:36.540848749 +0000 UTC m=+900.709920995" lastFinishedPulling="2026-01-23 12:07:43.299245005 +0000 UTC m=+907.468317231" observedRunningTime="2026-01-23 12:07:43.992883752 +0000 UTC m=+908.161955978" watchObservedRunningTime="2026-01-23 12:07:43.996748927 +0000 UTC m=+908.165821153" Jan 23 12:07:44 crc kubenswrapper[4865]: I0123 12:07:44.115079 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:44 crc kubenswrapper[4865]: E0123 12:07:44.134569 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" Jan 23 12:07:44 crc kubenswrapper[4865]: I0123 12:07:44.474414 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9k7jx"] Jan 23 12:07:44 crc kubenswrapper[4865]: W0123 12:07:44.484855 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2c9a47c_68c5_47c2_b8f5_9892163cb89b.slice/crio-1bf6fa22a8ee93cfc737ee9782c05b33df77bd989e14cb7b9db13def11841723 WatchSource:0}: Error finding container 1bf6fa22a8ee93cfc737ee9782c05b33df77bd989e14cb7b9db13def11841723: Status 404 returned error can't find the container with id 1bf6fa22a8ee93cfc737ee9782c05b33df77bd989e14cb7b9db13def11841723 Jan 23 12:07:44 crc kubenswrapper[4865]: I0123 12:07:44.930932 4865 generic.go:334] "Generic (PLEG): container finished" podID="a2c9a47c-68c5-47c2-b8f5-9892163cb89b" containerID="3f0e5989262250616efd1bd969a3da0d3bc92a854ceff9361c6806463a0ccb04" exitCode=0 Jan 23 12:07:44 crc kubenswrapper[4865]: I0123 12:07:44.931045 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k7jx" event={"ID":"a2c9a47c-68c5-47c2-b8f5-9892163cb89b","Type":"ContainerDied","Data":"3f0e5989262250616efd1bd969a3da0d3bc92a854ceff9361c6806463a0ccb04"} Jan 23 12:07:44 crc kubenswrapper[4865]: I0123 12:07:44.931281 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k7jx" event={"ID":"a2c9a47c-68c5-47c2-b8f5-9892163cb89b","Type":"ContainerStarted","Data":"1bf6fa22a8ee93cfc737ee9782c05b33df77bd989e14cb7b9db13def11841723"} Jan 23 12:07:45 crc kubenswrapper[4865]: I0123 12:07:45.940219 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k7jx" event={"ID":"a2c9a47c-68c5-47c2-b8f5-9892163cb89b","Type":"ContainerStarted","Data":"6009b9179d32fcb39578dcce9304ca02d07f42dbd2470f12d74bfb4c8e52a809"} Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.219998 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.230465 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.262020 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.340834 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.353822 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.391876 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.625875 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.759944 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.782484 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.823397 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.887007 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.932147 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.947880 4865 generic.go:334] "Generic (PLEG): container finished" podID="a2c9a47c-68c5-47c2-b8f5-9892163cb89b" containerID="6009b9179d32fcb39578dcce9304ca02d07f42dbd2470f12d74bfb4c8e52a809" exitCode=0 Jan 23 12:07:46 crc kubenswrapper[4865]: I0123 12:07:46.947950 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k7jx" event={"ID":"a2c9a47c-68c5-47c2-b8f5-9892163cb89b","Type":"ContainerDied","Data":"6009b9179d32fcb39578dcce9304ca02d07f42dbd2470f12d74bfb4c8e52a809"} Jan 23 12:07:47 crc kubenswrapper[4865]: I0123 12:07:47.301879 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:07:47 crc kubenswrapper[4865]: I0123 12:07:47.956203 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k7jx" event={"ID":"a2c9a47c-68c5-47c2-b8f5-9892163cb89b","Type":"ContainerStarted","Data":"000c4e8db3e3a13dc6856c27616241162d3e6a7f29d22d51e7891a12322dba0c"} Jan 23 12:07:47 crc kubenswrapper[4865]: I0123 12:07:47.958638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" event={"ID":"1959a742-ade2-4266-9a93-e96a1b6e3908","Type":"ContainerStarted","Data":"34c12403002230bf2149bbd73d264e2d87708fb1feba635b4fb8637cfcefe7d5"} Jan 23 12:07:47 crc kubenswrapper[4865]: I0123 12:07:47.959202 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:07:47 crc kubenswrapper[4865]: I0123 12:07:47.981853 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9k7jx" podStartSLOduration=2.223421398 podStartE2EDuration="4.981835053s" podCreationTimestamp="2026-01-23 12:07:43 +0000 
UTC" firstStartedPulling="2026-01-23 12:07:44.932838132 +0000 UTC m=+909.101910358" lastFinishedPulling="2026-01-23 12:07:47.691251787 +0000 UTC m=+911.860324013" observedRunningTime="2026-01-23 12:07:47.976516972 +0000 UTC m=+912.145589198" watchObservedRunningTime="2026-01-23 12:07:47.981835053 +0000 UTC m=+912.150907279" Jan 23 12:07:47 crc kubenswrapper[4865]: I0123 12:07:47.998408 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podStartSLOduration=3.174624881 podStartE2EDuration="1m1.998388709s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.761240387 +0000 UTC m=+852.930312613" lastFinishedPulling="2026-01-23 12:07:47.585004215 +0000 UTC m=+911.754076441" observedRunningTime="2026-01-23 12:07:47.995757484 +0000 UTC m=+912.164829710" watchObservedRunningTime="2026-01-23 12:07:47.998388709 +0000 UTC m=+912.167460925" Jan 23 12:07:48 crc kubenswrapper[4865]: I0123 12:07:48.967031 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" event={"ID":"e92ddc14-bdb6-4407-b8a3-047079030166","Type":"ContainerStarted","Data":"593f87f62b3ccdf0be76949bdac5a423993e1d8217741c16ed8d4bfe28a7e56c"} Jan 23 12:07:48 crc kubenswrapper[4865]: I0123 12:07:48.967716 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:07:48 crc kubenswrapper[4865]: I0123 12:07:48.991467 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podStartSLOduration=2.718493467 podStartE2EDuration="1m2.99145165s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.334687382 +0000 UTC m=+852.503759608" lastFinishedPulling="2026-01-23 12:07:48.607645565 +0000 UTC m=+912.776717791" observedRunningTime="2026-01-23 12:07:48.987630377 +0000 UTC m=+913.156702613" watchObservedRunningTime="2026-01-23 12:07:48.99145165 +0000 UTC m=+913.160523876" Jan 23 12:07:49 crc kubenswrapper[4865]: I0123 12:07:49.170237 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:07:49 crc kubenswrapper[4865]: I0123 12:07:49.973427 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" event={"ID":"8e227974-40b8-4d16-8d5f-961b705a9740","Type":"ContainerStarted","Data":"0b2a7803942b15c05aaa94d320090897efd8100e0a4bcd07a1a0e623a23a3516"} Jan 23 12:07:50 crc kubenswrapper[4865]: I0123 12:07:50.005913 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" podStartSLOduration=2.047200615 podStartE2EDuration="1m3.005895325s" podCreationTimestamp="2026-01-23 12:06:47 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.803811474 +0000 UTC m=+852.972883700" lastFinishedPulling="2026-01-23 12:07:49.762506184 +0000 UTC m=+913.931578410" observedRunningTime="2026-01-23 12:07:50.001951478 +0000 UTC m=+914.171023704" watchObservedRunningTime="2026-01-23 12:07:50.005895325 +0000 UTC m=+914.174967551" Jan 23 12:07:52 crc kubenswrapper[4865]: I0123 12:07:52.015179 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:07:52 crc kubenswrapper[4865]: I0123 12:07:52.492661 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:07:54 crc kubenswrapper[4865]: I0123 12:07:54.115734 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:54 crc kubenswrapper[4865]: I0123 12:07:54.115785 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:54 crc kubenswrapper[4865]: I0123 12:07:54.159944 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:55 crc kubenswrapper[4865]: I0123 12:07:55.040539 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:55 crc kubenswrapper[4865]: I0123 12:07:55.088721 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9k7jx"] Jan 23 12:07:56 crc kubenswrapper[4865]: I0123 12:07:56.006579 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" event={"ID":"8ef0fdaa-8086-467d-8106-5c6dec532dba","Type":"ContainerStarted","Data":"a2cc111c5b050a0cea0b3665386dcd21df0b26072f8ef117916e8082c8b01f56"} Jan 23 12:07:56 crc kubenswrapper[4865]: I0123 12:07:56.007169 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:07:56 crc kubenswrapper[4865]: I0123 12:07:56.152014 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podStartSLOduration=4.190480289 podStartE2EDuration="1m10.151996791s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.849265119 +0000 UTC m=+853.018337345" lastFinishedPulling="2026-01-23 12:07:54.810781611 +0000 UTC m=+918.979853847" observedRunningTime="2026-01-23 12:07:56.026987534 +0000 UTC m=+920.196059760" watchObservedRunningTime="2026-01-23 12:07:56.151996791 +0000 UTC m=+920.321069007" Jan 23 12:07:56 crc kubenswrapper[4865]: I0123 12:07:56.702906 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:07:56 crc kubenswrapper[4865]: I0123 12:07:56.851212 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:07:57 crc kubenswrapper[4865]: I0123 12:07:57.014369 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" event={"ID":"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73","Type":"ContainerStarted","Data":"ce6ae7c2846a936cf92acff3471ab484efba821c99400e231c47bf24e176f43e"} Jan 23 12:07:57 crc kubenswrapper[4865]: I0123 12:07:57.014549 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9k7jx" podUID="a2c9a47c-68c5-47c2-b8f5-9892163cb89b" containerName="registry-server" 
containerID="cri-o://000c4e8db3e3a13dc6856c27616241162d3e6a7f29d22d51e7891a12322dba0c" gracePeriod=2 Jan 23 12:07:57 crc kubenswrapper[4865]: I0123 12:07:57.036787 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podStartSLOduration=3.920169797 podStartE2EDuration="1m11.036768495s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.821182468 +0000 UTC m=+852.990254694" lastFinishedPulling="2026-01-23 12:07:55.937781166 +0000 UTC m=+920.106853392" observedRunningTime="2026-01-23 12:07:57.031460525 +0000 UTC m=+921.200532751" watchObservedRunningTime="2026-01-23 12:07:57.036768495 +0000 UTC m=+921.205840721" Jan 23 12:07:58 crc kubenswrapper[4865]: I0123 12:07:58.071666 4865 generic.go:334] "Generic (PLEG): container finished" podID="a2c9a47c-68c5-47c2-b8f5-9892163cb89b" containerID="000c4e8db3e3a13dc6856c27616241162d3e6a7f29d22d51e7891a12322dba0c" exitCode=0 Jan 23 12:07:58 crc kubenswrapper[4865]: I0123 12:07:58.071820 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k7jx" event={"ID":"a2c9a47c-68c5-47c2-b8f5-9892163cb89b","Type":"ContainerDied","Data":"000c4e8db3e3a13dc6856c27616241162d3e6a7f29d22d51e7891a12322dba0c"} Jan 23 12:07:58 crc kubenswrapper[4865]: I0123 12:07:58.252976 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:58 crc kubenswrapper[4865]: I0123 12:07:58.285468 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r99wg\" (UniqueName: \"kubernetes.io/projected/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-kube-api-access-r99wg\") pod \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\" (UID: \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\") " Jan 23 12:07:58 crc kubenswrapper[4865]: I0123 12:07:58.285823 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-catalog-content\") pod \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\" (UID: \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\") " Jan 23 12:07:58 crc kubenswrapper[4865]: I0123 12:07:58.285956 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-utilities\") pod \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\" (UID: \"a2c9a47c-68c5-47c2-b8f5-9892163cb89b\") " Jan 23 12:07:58 crc kubenswrapper[4865]: I0123 12:07:58.286481 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-utilities" (OuterVolumeSpecName: "utilities") pod "a2c9a47c-68c5-47c2-b8f5-9892163cb89b" (UID: "a2c9a47c-68c5-47c2-b8f5-9892163cb89b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:07:58 crc kubenswrapper[4865]: I0123 12:07:58.291543 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-kube-api-access-r99wg" (OuterVolumeSpecName: "kube-api-access-r99wg") pod "a2c9a47c-68c5-47c2-b8f5-9892163cb89b" (UID: "a2c9a47c-68c5-47c2-b8f5-9892163cb89b"). InnerVolumeSpecName "kube-api-access-r99wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:07:58 crc kubenswrapper[4865]: I0123 12:07:58.344249 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2c9a47c-68c5-47c2-b8f5-9892163cb89b" (UID: "a2c9a47c-68c5-47c2-b8f5-9892163cb89b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:07:58 crc kubenswrapper[4865]: I0123 12:07:58.387137 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:07:58 crc kubenswrapper[4865]: I0123 12:07:58.387174 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r99wg\" (UniqueName: \"kubernetes.io/projected/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-kube-api-access-r99wg\") on node \"crc\" DevicePath \"\"" Jan 23 12:07:58 crc kubenswrapper[4865]: I0123 12:07:58.387187 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c9a47c-68c5-47c2-b8f5-9892163cb89b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.080118 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" event={"ID":"50ab40ef-54b8-4392-89ad-6b73c346c225","Type":"ContainerStarted","Data":"92d4c517ce6499ebcbda5be7b0086e4746751e676cc9d2a3ff865034f2adc980"} Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.080983 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.081250 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" event={"ID":"dbfec6f5-80b4-480f-a958-c3107b2776c0","Type":"ContainerStarted","Data":"129bfde63977859660c6eb3aa9e50a03c29e7268576ca70bbc6f2ad00f8febc8"} Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.081652 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.084008 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k7jx" event={"ID":"a2c9a47c-68c5-47c2-b8f5-9892163cb89b","Type":"ContainerDied","Data":"1bf6fa22a8ee93cfc737ee9782c05b33df77bd989e14cb7b9db13def11841723"} Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.084073 4865 scope.go:117] "RemoveContainer" containerID="000c4e8db3e3a13dc6856c27616241162d3e6a7f29d22d51e7891a12322dba0c" Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.084235 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9k7jx" Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.117977 4865 scope.go:117] "RemoveContainer" containerID="6009b9179d32fcb39578dcce9304ca02d07f42dbd2470f12d74bfb4c8e52a809" Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.123591 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" podStartSLOduration=4.031423427 podStartE2EDuration="1m13.123566945s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.842349539 +0000 UTC m=+853.011421775" lastFinishedPulling="2026-01-23 12:07:57.934493067 +0000 UTC m=+922.103565293" observedRunningTime="2026-01-23 12:07:59.110028073 +0000 UTC m=+923.279100299" watchObservedRunningTime="2026-01-23 12:07:59.123566945 +0000 UTC m=+923.292639171" Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.157512 4865 scope.go:117] "RemoveContainer" containerID="3f0e5989262250616efd1bd969a3da0d3bc92a854ceff9361c6806463a0ccb04" Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.170808 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podStartSLOduration=4.070379596 podStartE2EDuration="1m13.170787894s" podCreationTimestamp="2026-01-23 12:06:46 +0000 UTC" firstStartedPulling="2026-01-23 12:06:48.833392532 +0000 UTC m=+853.002464758" lastFinishedPulling="2026-01-23 12:07:57.93380084 +0000 UTC m=+922.102873056" observedRunningTime="2026-01-23 12:07:59.135474307 +0000 UTC m=+923.304546553" watchObservedRunningTime="2026-01-23 12:07:59.170787894 +0000 UTC m=+923.339860120" Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.207678 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9k7jx"] Jan 23 12:07:59 crc kubenswrapper[4865]: I0123 12:07:59.224286 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9k7jx"] Jan 23 12:08:00 crc kubenswrapper[4865]: I0123 12:08:00.125363 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c9a47c-68c5-47c2-b8f5-9892163cb89b" path="/var/lib/kubelet/pods/a2c9a47c-68c5-47c2-b8f5-9892163cb89b/volumes" Jan 23 12:08:06 crc kubenswrapper[4865]: I0123 12:08:06.941824 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:08:06 crc kubenswrapper[4865]: I0123 12:08:06.944886 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:08:07 crc kubenswrapper[4865]: I0123 12:08:07.258209 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:08:07 crc kubenswrapper[4865]: I0123 12:08:07.299242 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:08:07 crc kubenswrapper[4865]: I0123 12:08:07.582401 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.169195 4865 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-tg8g6"] Jan 23 12:08:17 crc kubenswrapper[4865]: E0123 12:08:17.169960 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c9a47c-68c5-47c2-b8f5-9892163cb89b" containerName="extract-utilities" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.169977 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c9a47c-68c5-47c2-b8f5-9892163cb89b" containerName="extract-utilities" Jan 23 12:08:17 crc kubenswrapper[4865]: E0123 12:08:17.169990 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c9a47c-68c5-47c2-b8f5-9892163cb89b" containerName="registry-server" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.169997 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c9a47c-68c5-47c2-b8f5-9892163cb89b" containerName="registry-server" Jan 23 12:08:17 crc kubenswrapper[4865]: E0123 12:08:17.170011 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c9a47c-68c5-47c2-b8f5-9892163cb89b" containerName="extract-content" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.170017 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c9a47c-68c5-47c2-b8f5-9892163cb89b" containerName="extract-content" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.170183 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c9a47c-68c5-47c2-b8f5-9892163cb89b" containerName="registry-server" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.172227 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.191173 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tg8g6"] Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.291391 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l46w9\" (UniqueName: \"kubernetes.io/projected/a23e4222-f3ad-410f-bba7-01180f7691e0-kube-api-access-l46w9\") pod \"certified-operators-tg8g6\" (UID: \"a23e4222-f3ad-410f-bba7-01180f7691e0\") " pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.291789 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23e4222-f3ad-410f-bba7-01180f7691e0-catalog-content\") pod \"certified-operators-tg8g6\" (UID: \"a23e4222-f3ad-410f-bba7-01180f7691e0\") " pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.291894 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23e4222-f3ad-410f-bba7-01180f7691e0-utilities\") pod \"certified-operators-tg8g6\" (UID: \"a23e4222-f3ad-410f-bba7-01180f7691e0\") " pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.392855 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23e4222-f3ad-410f-bba7-01180f7691e0-catalog-content\") pod \"certified-operators-tg8g6\" (UID: \"a23e4222-f3ad-410f-bba7-01180f7691e0\") " pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.392903 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23e4222-f3ad-410f-bba7-01180f7691e0-utilities\") pod \"certified-operators-tg8g6\" (UID: \"a23e4222-f3ad-410f-bba7-01180f7691e0\") " pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.392933 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l46w9\" (UniqueName: \"kubernetes.io/projected/a23e4222-f3ad-410f-bba7-01180f7691e0-kube-api-access-l46w9\") pod \"certified-operators-tg8g6\" (UID: \"a23e4222-f3ad-410f-bba7-01180f7691e0\") " pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.393441 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23e4222-f3ad-410f-bba7-01180f7691e0-catalog-content\") pod \"certified-operators-tg8g6\" (UID: \"a23e4222-f3ad-410f-bba7-01180f7691e0\") " pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.393578 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23e4222-f3ad-410f-bba7-01180f7691e0-utilities\") pod \"certified-operators-tg8g6\" (UID: \"a23e4222-f3ad-410f-bba7-01180f7691e0\") " pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.415473 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l46w9\" (UniqueName: \"kubernetes.io/projected/a23e4222-f3ad-410f-bba7-01180f7691e0-kube-api-access-l46w9\") pod \"certified-operators-tg8g6\" (UID: \"a23e4222-f3ad-410f-bba7-01180f7691e0\") " pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:17 crc kubenswrapper[4865]: I0123 12:08:17.490882 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:18 crc kubenswrapper[4865]: I0123 12:08:18.029703 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tg8g6"] Jan 23 12:08:18 crc kubenswrapper[4865]: I0123 12:08:18.236042 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg8g6" event={"ID":"a23e4222-f3ad-410f-bba7-01180f7691e0","Type":"ContainerStarted","Data":"ff63d57bdfbe1058643ed26b6d840609779bd665c3191d305e39a8597bfc0ae1"} Jan 23 12:08:18 crc kubenswrapper[4865]: I0123 12:08:18.236096 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg8g6" event={"ID":"a23e4222-f3ad-410f-bba7-01180f7691e0","Type":"ContainerStarted","Data":"d4cfed7c9dbd33ddffd9f460c3419471e22fe3c08423da82d1741e788443d5d0"} Jan 23 12:08:19 crc kubenswrapper[4865]: I0123 12:08:19.243388 4865 generic.go:334] "Generic (PLEG): container finished" podID="a23e4222-f3ad-410f-bba7-01180f7691e0" containerID="ff63d57bdfbe1058643ed26b6d840609779bd665c3191d305e39a8597bfc0ae1" exitCode=0 Jan 23 12:08:19 crc kubenswrapper[4865]: I0123 12:08:19.243770 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg8g6" event={"ID":"a23e4222-f3ad-410f-bba7-01180f7691e0","Type":"ContainerDied","Data":"ff63d57bdfbe1058643ed26b6d840609779bd665c3191d305e39a8597bfc0ae1"} Jan 23 12:08:20 crc kubenswrapper[4865]: I0123 12:08:20.264133 4865 generic.go:334] "Generic (PLEG): container finished" podID="a23e4222-f3ad-410f-bba7-01180f7691e0" containerID="d3b32386f7464c699d33c9f2f779e19259a9665be9fefed04ef0f6f74796430b" exitCode=0 Jan 23 12:08:20 crc kubenswrapper[4865]: I0123 12:08:20.264329 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg8g6" event={"ID":"a23e4222-f3ad-410f-bba7-01180f7691e0","Type":"ContainerDied","Data":"d3b32386f7464c699d33c9f2f779e19259a9665be9fefed04ef0f6f74796430b"} Jan 23 12:08:21 crc kubenswrapper[4865]: I0123 12:08:21.274374 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg8g6" event={"ID":"a23e4222-f3ad-410f-bba7-01180f7691e0","Type":"ContainerStarted","Data":"e40f2e2b5803f4ba7d93de9648f707475af218e7a516dbb9f68feb391e7547af"} Jan 23 12:08:21 crc kubenswrapper[4865]: I0123 12:08:21.308921 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tg8g6" podStartSLOduration=2.902104489 podStartE2EDuration="4.308903748s" podCreationTimestamp="2026-01-23 12:08:17 +0000 UTC" firstStartedPulling="2026-01-23 12:08:19.246482316 +0000 UTC m=+943.415554542" lastFinishedPulling="2026-01-23 12:08:20.653281575 +0000 UTC m=+944.822353801" observedRunningTime="2026-01-23 12:08:21.304051888 +0000 UTC m=+945.473124114" watchObservedRunningTime="2026-01-23 12:08:21.308903748 +0000 UTC m=+945.477975964" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.420915 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dc8ff8c5-mzpvc"] Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.422257 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.425110 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.425301 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.425903 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.426029 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jxsb6" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.440906 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc8ff8c5-mzpvc"] Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.492426 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54d4b677c7-vrhzp"] Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.493472 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.496063 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.503437 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54d4b677c7-vrhzp"] Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.571683 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d92cd0-2f42-47e4-9b28-d3c3b596f818-config\") pod \"dnsmasq-dns-54d4b677c7-vrhzp\" (UID: \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\") " pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.571757 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb2q4\" (UniqueName: \"kubernetes.io/projected/92d92cd0-2f42-47e4-9b28-d3c3b596f818-kube-api-access-zb2q4\") pod \"dnsmasq-dns-54d4b677c7-vrhzp\" (UID: \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\") " pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.571781 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b3b2411-8af8-487c-8906-f7b035e74ae0-config\") pod \"dnsmasq-dns-dc8ff8c5-mzpvc\" (UID: \"0b3b2411-8af8-487c-8906-f7b035e74ae0\") " pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.571807 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h27k8\" (UniqueName: \"kubernetes.io/projected/0b3b2411-8af8-487c-8906-f7b035e74ae0-kube-api-access-h27k8\") pod \"dnsmasq-dns-dc8ff8c5-mzpvc\" (UID: \"0b3b2411-8af8-487c-8906-f7b035e74ae0\") " pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.571821 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d92cd0-2f42-47e4-9b28-d3c3b596f818-dns-svc\") pod \"dnsmasq-dns-54d4b677c7-vrhzp\" (UID: \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\") " 
pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.673003 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d92cd0-2f42-47e4-9b28-d3c3b596f818-config\") pod \"dnsmasq-dns-54d4b677c7-vrhzp\" (UID: \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\") " pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.673076 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb2q4\" (UniqueName: \"kubernetes.io/projected/92d92cd0-2f42-47e4-9b28-d3c3b596f818-kube-api-access-zb2q4\") pod \"dnsmasq-dns-54d4b677c7-vrhzp\" (UID: \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\") " pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.673101 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b3b2411-8af8-487c-8906-f7b035e74ae0-config\") pod \"dnsmasq-dns-dc8ff8c5-mzpvc\" (UID: \"0b3b2411-8af8-487c-8906-f7b035e74ae0\") " pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.673148 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h27k8\" (UniqueName: \"kubernetes.io/projected/0b3b2411-8af8-487c-8906-f7b035e74ae0-kube-api-access-h27k8\") pod \"dnsmasq-dns-dc8ff8c5-mzpvc\" (UID: \"0b3b2411-8af8-487c-8906-f7b035e74ae0\") " pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.673175 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d92cd0-2f42-47e4-9b28-d3c3b596f818-dns-svc\") pod \"dnsmasq-dns-54d4b677c7-vrhzp\" (UID: \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\") " pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.674210 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b3b2411-8af8-487c-8906-f7b035e74ae0-config\") pod \"dnsmasq-dns-dc8ff8c5-mzpvc\" (UID: \"0b3b2411-8af8-487c-8906-f7b035e74ae0\") " pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.674218 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d92cd0-2f42-47e4-9b28-d3c3b596f818-config\") pod \"dnsmasq-dns-54d4b677c7-vrhzp\" (UID: \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\") " pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.674249 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d92cd0-2f42-47e4-9b28-d3c3b596f818-dns-svc\") pod \"dnsmasq-dns-54d4b677c7-vrhzp\" (UID: \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\") " pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.692064 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h27k8\" (UniqueName: \"kubernetes.io/projected/0b3b2411-8af8-487c-8906-f7b035e74ae0-kube-api-access-h27k8\") pod \"dnsmasq-dns-dc8ff8c5-mzpvc\" (UID: \"0b3b2411-8af8-487c-8906-f7b035e74ae0\") " pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.705425 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zb2q4\" (UniqueName: \"kubernetes.io/projected/92d92cd0-2f42-47e4-9b28-d3c3b596f818-kube-api-access-zb2q4\") pod \"dnsmasq-dns-54d4b677c7-vrhzp\" (UID: \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\") " pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.738358 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" Jan 23 12:08:23 crc kubenswrapper[4865]: I0123 12:08:23.806447 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:08:24 crc kubenswrapper[4865]: I0123 12:08:24.342835 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dc8ff8c5-mzpvc"] Jan 23 12:08:24 crc kubenswrapper[4865]: W0123 12:08:24.349215 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b3b2411_8af8_487c_8906_f7b035e74ae0.slice/crio-392e703b431d08b1c5fe29de3bffa127eb2d301dde187aeb381f9e6b483d1a25 WatchSource:0}: Error finding container 392e703b431d08b1c5fe29de3bffa127eb2d301dde187aeb381f9e6b483d1a25: Status 404 returned error can't find the container with id 392e703b431d08b1c5fe29de3bffa127eb2d301dde187aeb381f9e6b483d1a25 Jan 23 12:08:24 crc kubenswrapper[4865]: I0123 12:08:24.425729 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54d4b677c7-vrhzp"] Jan 23 12:08:24 crc kubenswrapper[4865]: W0123 12:08:24.429556 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92d92cd0_2f42_47e4_9b28_d3c3b596f818.slice/crio-b732894e7f294dae4f775f2b17b54ea5ec7a41484ddfd89d1a5d86e48f7c3bc8 WatchSource:0}: Error finding container b732894e7f294dae4f775f2b17b54ea5ec7a41484ddfd89d1a5d86e48f7c3bc8: Status 404 returned error can't find the container with id b732894e7f294dae4f775f2b17b54ea5ec7a41484ddfd89d1a5d86e48f7c3bc8 Jan 23 12:08:25 crc kubenswrapper[4865]: I0123 12:08:25.305877 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" event={"ID":"0b3b2411-8af8-487c-8906-f7b035e74ae0","Type":"ContainerStarted","Data":"392e703b431d08b1c5fe29de3bffa127eb2d301dde187aeb381f9e6b483d1a25"} Jan 23 12:08:25 crc kubenswrapper[4865]: I0123 12:08:25.307566 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" event={"ID":"92d92cd0-2f42-47e4-9b28-d3c3b596f818","Type":"ContainerStarted","Data":"b732894e7f294dae4f775f2b17b54ea5ec7a41484ddfd89d1a5d86e48f7c3bc8"} Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.386510 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54d4b677c7-vrhzp"] Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.440654 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cf7dc6df-xpjmc"] Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.442681 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.469753 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhggg\" (UniqueName: \"kubernetes.io/projected/15c39433-1f67-4c52-9a8f-df981b9af880-kube-api-access-rhggg\") pod \"dnsmasq-dns-78cf7dc6df-xpjmc\" (UID: \"15c39433-1f67-4c52-9a8f-df981b9af880\") " pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.469814 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15c39433-1f67-4c52-9a8f-df981b9af880-config\") pod \"dnsmasq-dns-78cf7dc6df-xpjmc\" (UID: \"15c39433-1f67-4c52-9a8f-df981b9af880\") " pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.469842 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15c39433-1f67-4c52-9a8f-df981b9af880-dns-svc\") pod \"dnsmasq-dns-78cf7dc6df-xpjmc\" (UID: \"15c39433-1f67-4c52-9a8f-df981b9af880\") " pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.471430 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cf7dc6df-xpjmc"] Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.571116 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhggg\" (UniqueName: \"kubernetes.io/projected/15c39433-1f67-4c52-9a8f-df981b9af880-kube-api-access-rhggg\") pod \"dnsmasq-dns-78cf7dc6df-xpjmc\" (UID: \"15c39433-1f67-4c52-9a8f-df981b9af880\") " pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.571176 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15c39433-1f67-4c52-9a8f-df981b9af880-config\") pod \"dnsmasq-dns-78cf7dc6df-xpjmc\" (UID: \"15c39433-1f67-4c52-9a8f-df981b9af880\") " pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.571200 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15c39433-1f67-4c52-9a8f-df981b9af880-dns-svc\") pod \"dnsmasq-dns-78cf7dc6df-xpjmc\" (UID: \"15c39433-1f67-4c52-9a8f-df981b9af880\") " pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.572019 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15c39433-1f67-4c52-9a8f-df981b9af880-dns-svc\") pod \"dnsmasq-dns-78cf7dc6df-xpjmc\" (UID: \"15c39433-1f67-4c52-9a8f-df981b9af880\") " pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.573182 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15c39433-1f67-4c52-9a8f-df981b9af880-config\") pod \"dnsmasq-dns-78cf7dc6df-xpjmc\" (UID: \"15c39433-1f67-4c52-9a8f-df981b9af880\") " pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.600218 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhggg\" (UniqueName: 
\"kubernetes.io/projected/15c39433-1f67-4c52-9a8f-df981b9af880-kube-api-access-rhggg\") pod \"dnsmasq-dns-78cf7dc6df-xpjmc\" (UID: \"15c39433-1f67-4c52-9a8f-df981b9af880\") " pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.776764 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.887859 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc8ff8c5-mzpvc"] Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.927174 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d448fc49f-nkrhv"] Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.930014 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:08:26 crc kubenswrapper[4865]: I0123 12:08:26.964673 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d448fc49f-nkrhv"] Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.107913 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fe2611-a28b-49e2-9c01-b50eebc448dd-config\") pod \"dnsmasq-dns-d448fc49f-nkrhv\" (UID: \"45fe2611-a28b-49e2-9c01-b50eebc448dd\") " pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.107979 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wjwg\" (UniqueName: \"kubernetes.io/projected/45fe2611-a28b-49e2-9c01-b50eebc448dd-kube-api-access-4wjwg\") pod \"dnsmasq-dns-d448fc49f-nkrhv\" (UID: \"45fe2611-a28b-49e2-9c01-b50eebc448dd\") " pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.108073 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fe2611-a28b-49e2-9c01-b50eebc448dd-dns-svc\") pod \"dnsmasq-dns-d448fc49f-nkrhv\" (UID: \"45fe2611-a28b-49e2-9c01-b50eebc448dd\") " pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.209955 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fe2611-a28b-49e2-9c01-b50eebc448dd-config\") pod \"dnsmasq-dns-d448fc49f-nkrhv\" (UID: \"45fe2611-a28b-49e2-9c01-b50eebc448dd\") " pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.210009 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wjwg\" (UniqueName: \"kubernetes.io/projected/45fe2611-a28b-49e2-9c01-b50eebc448dd-kube-api-access-4wjwg\") pod \"dnsmasq-dns-d448fc49f-nkrhv\" (UID: \"45fe2611-a28b-49e2-9c01-b50eebc448dd\") " pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.210054 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fe2611-a28b-49e2-9c01-b50eebc448dd-dns-svc\") pod \"dnsmasq-dns-d448fc49f-nkrhv\" (UID: \"45fe2611-a28b-49e2-9c01-b50eebc448dd\") " pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.210900 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/45fe2611-a28b-49e2-9c01-b50eebc448dd-config\") pod \"dnsmasq-dns-d448fc49f-nkrhv\" (UID: \"45fe2611-a28b-49e2-9c01-b50eebc448dd\") " pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.211407 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fe2611-a28b-49e2-9c01-b50eebc448dd-dns-svc\") pod \"dnsmasq-dns-d448fc49f-nkrhv\" (UID: \"45fe2611-a28b-49e2-9c01-b50eebc448dd\") " pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.233400 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wjwg\" (UniqueName: \"kubernetes.io/projected/45fe2611-a28b-49e2-9c01-b50eebc448dd-kube-api-access-4wjwg\") pod \"dnsmasq-dns-d448fc49f-nkrhv\" (UID: \"45fe2611-a28b-49e2-9c01-b50eebc448dd\") " pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.265584 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.491019 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.491113 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.621329 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.737164 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.769784 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.778223 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.779615 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.863580 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.864616 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.865244 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k2tck" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.865306 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.865444 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.871302 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.967270 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.967325 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/10a07490-f361-43e5-8d3e-a8bd917b3b84-pod-info\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.967371 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqtwc\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-kube-api-access-jqtwc\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.967403 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.967441 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.967463 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-config-data\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.967494 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.967514 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/10a07490-f361-43e5-8d3e-a8bd917b3b84-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.967534 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-server-conf\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.967561 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:27 crc kubenswrapper[4865]: I0123 12:08:27.967583 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.071235 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.072100 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-server-conf\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.072154 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.072171 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.072209 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.072248 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/10a07490-f361-43e5-8d3e-a8bd917b3b84-pod-info\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.072279 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqtwc\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-kube-api-access-jqtwc\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.072309 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.072337 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.072355 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-config-data\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.072385 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.072408 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/10a07490-f361-43e5-8d3e-a8bd917b3b84-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.075788 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.075883 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.076169 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-config-data\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.076424 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.079707 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.080544 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.080844 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-server-conf\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.081028 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.081115 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.081370 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rrxrn" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.083286 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/10a07490-f361-43e5-8d3e-a8bd917b3b84-pod-info\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.089048 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.089238 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.089357 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.089492 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.093073 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/10a07490-f361-43e5-8d3e-a8bd917b3b84-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.096457 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.098765 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.113395 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqtwc\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-kube-api-access-jqtwc\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.114681 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.174759 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.174822 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.174858 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.174879 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9p8c\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-kube-api-access-r9p8c\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.174914 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ebb7983c-3aed-42f5-8635-8188f7abb9d5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 
12:08:28.174956 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.174979 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.175028 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.175049 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ebb7983c-3aed-42f5-8635-8188f7abb9d5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.175071 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.175112 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.220346 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cf7dc6df-xpjmc"] Jan 23 12:08:28 crc kubenswrapper[4865]: W0123 12:08:28.251106 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15c39433_1f67_4c52_9a8f_df981b9af880.slice/crio-50e6491cb2d099a0fd0a0d557dfd04df7077d1aa98bd26a82263d7bf38fa7ebf WatchSource:0}: Error finding container 50e6491cb2d099a0fd0a0d557dfd04df7077d1aa98bd26a82263d7bf38fa7ebf: Status 404 returned error can't find the container with id 50e6491cb2d099a0fd0a0d557dfd04df7077d1aa98bd26a82263d7bf38fa7ebf Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.276072 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ebb7983c-3aed-42f5-8635-8188f7abb9d5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.276128 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.276154 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.276178 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.276196 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ebb7983c-3aed-42f5-8635-8188f7abb9d5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.276219 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.276273 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.276329 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.276348 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.276364 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.276380 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9p8c\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-kube-api-access-r9p8c\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.277916 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.278043 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.278910 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.279094 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.279232 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.279314 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.287124 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ebb7983c-3aed-42f5-8635-8188f7abb9d5-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.292384 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.297279 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.297289 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ebb7983c-3aed-42f5-8635-8188f7abb9d5-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.313900 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d448fc49f-nkrhv"] Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.319209 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9p8c\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-kube-api-access-r9p8c\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.322954 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: W0123 12:08:28.335360 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45fe2611_a28b_49e2_9c01_b50eebc448dd.slice/crio-b1b9eaf55c57946ddccc662f663d67ac759938bf83d10b9d11fa3b5b62e4f697 WatchSource:0}: Error finding container b1b9eaf55c57946ddccc662f663d67ac759938bf83d10b9d11fa3b5b62e4f697: Status 404 returned error can't find the container with id b1b9eaf55c57946ddccc662f663d67ac759938bf83d10b9d11fa3b5b62e4f697 Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.356974 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" event={"ID":"15c39433-1f67-4c52-9a8f-df981b9af880","Type":"ContainerStarted","Data":"50e6491cb2d099a0fd0a0d557dfd04df7077d1aa98bd26a82263d7bf38fa7ebf"} Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.416216 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.467783 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tg8g6"] Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.483254 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.530971 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " pod="openstack/rabbitmq-server-0" Jan 23 12:08:28 crc kubenswrapper[4865]: I0123 12:08:28.810375 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.156410 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.359586 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.365036 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.369511 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.369988 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-nhqgr" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.370256 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.370432 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.378824 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ebb7983c-3aed-42f5-8635-8188f7abb9d5","Type":"ContainerStarted","Data":"0c65d0447f85cb26b3cf25ce7b89d5bc5136452955764b64393bc357676606dd"} Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.381105 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.388945 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.389575 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" event={"ID":"45fe2611-a28b-49e2-9c01-b50eebc448dd","Type":"ContainerStarted","Data":"b1b9eaf55c57946ddccc662f663d67ac759938bf83d10b9d11fa3b5b62e4f697"} Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.501059 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.501119 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/78884295-a3de-4e00-bcc4-6a1627b50717-kolla-config\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.501150 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5xzk\" (UniqueName: \"kubernetes.io/projected/78884295-a3de-4e00-bcc4-6a1627b50717-kube-api-access-j5xzk\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.501174 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/78884295-a3de-4e00-bcc4-6a1627b50717-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.501195 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78884295-a3de-4e00-bcc4-6a1627b50717-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.501217 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/78884295-a3de-4e00-bcc4-6a1627b50717-config-data-default\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.501250 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/78884295-a3de-4e00-bcc4-6a1627b50717-config-data-generated\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.501271 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78884295-a3de-4e00-bcc4-6a1627b50717-operator-scripts\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.566192 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 12:08:29 crc kubenswrapper[4865]: W0123 12:08:29.600736 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10a07490_f361_43e5_8d3e_a8bd917b3b84.slice/crio-675b08be31113371ae4c652512134af5515c4a0355844c312dbd7907ec942bee WatchSource:0}: Error finding container 675b08be31113371ae4c652512134af5515c4a0355844c312dbd7907ec942bee: Status 404 returned error can't find the container with id 675b08be31113371ae4c652512134af5515c4a0355844c312dbd7907ec942bee Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.602418 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.602461 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/78884295-a3de-4e00-bcc4-6a1627b50717-kolla-config\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.602496 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5xzk\" (UniqueName: \"kubernetes.io/projected/78884295-a3de-4e00-bcc4-6a1627b50717-kube-api-access-j5xzk\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.602527 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/78884295-a3de-4e00-bcc4-6a1627b50717-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.602548 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78884295-a3de-4e00-bcc4-6a1627b50717-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.602571 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/78884295-a3de-4e00-bcc4-6a1627b50717-config-data-default\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.602617 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/78884295-a3de-4e00-bcc4-6a1627b50717-config-data-generated\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.602636 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78884295-a3de-4e00-bcc4-6a1627b50717-operator-scripts\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.604445 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78884295-a3de-4e00-bcc4-6a1627b50717-operator-scripts\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.604645 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.605154 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/78884295-a3de-4e00-bcc4-6a1627b50717-kolla-config\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.605381 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/78884295-a3de-4e00-bcc4-6a1627b50717-config-data-generated\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.605857 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/78884295-a3de-4e00-bcc4-6a1627b50717-config-data-default\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.622896 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78884295-a3de-4e00-bcc4-6a1627b50717-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
\"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.636279 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/78884295-a3de-4e00-bcc4-6a1627b50717-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.642458 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.642575 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5xzk\" (UniqueName: \"kubernetes.io/projected/78884295-a3de-4e00-bcc4-6a1627b50717-kube-api-access-j5xzk\") pod \"openstack-galera-0\" (UID: \"78884295-a3de-4e00-bcc4-6a1627b50717\") " pod="openstack/openstack-galera-0" Jan 23 12:08:29 crc kubenswrapper[4865]: I0123 12:08:29.750737 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.403387 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tg8g6" podUID="a23e4222-f3ad-410f-bba7-01180f7691e0" containerName="registry-server" containerID="cri-o://e40f2e2b5803f4ba7d93de9648f707475af218e7a516dbb9f68feb391e7547af" gracePeriod=2 Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.404866 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10a07490-f361-43e5-8d3e-a8bd917b3b84","Type":"ContainerStarted","Data":"675b08be31113371ae4c652512134af5515c4a0355844c312dbd7907ec942bee"} Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.432916 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.434296 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.439820 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.440029 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-q6psp" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.440262 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.440355 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.446791 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.521664 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cf30925-0355-42db-9895-f23a97fca08e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.521702 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5cf30925-0355-42db-9895-f23a97fca08e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.521728 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf30925-0355-42db-9895-f23a97fca08e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.521747 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5cf30925-0355-42db-9895-f23a97fca08e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.521780 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.522921 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5cf30925-0355-42db-9895-f23a97fca08e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.522950 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnw67\" (UniqueName: 
\"kubernetes.io/projected/5cf30925-0355-42db-9895-f23a97fca08e-kube-api-access-nnw67\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.522986 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cf30925-0355-42db-9895-f23a97fca08e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.624628 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cf30925-0355-42db-9895-f23a97fca08e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.624693 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5cf30925-0355-42db-9895-f23a97fca08e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.624725 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf30925-0355-42db-9895-f23a97fca08e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.624761 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5cf30925-0355-42db-9895-f23a97fca08e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.624800 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.624869 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5cf30925-0355-42db-9895-f23a97fca08e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.624896 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnw67\" (UniqueName: \"kubernetes.io/projected/5cf30925-0355-42db-9895-f23a97fca08e-kube-api-access-nnw67\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.624932 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5cf30925-0355-42db-9895-f23a97fca08e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.638308 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.638491 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5cf30925-0355-42db-9895-f23a97fca08e-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.646159 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cf30925-0355-42db-9895-f23a97fca08e-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.634085 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5cf30925-0355-42db-9895-f23a97fca08e-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.660623 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cf30925-0355-42db-9895-f23a97fca08e-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.662980 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5cf30925-0355-42db-9895-f23a97fca08e-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.663525 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5cf30925-0355-42db-9895-f23a97fca08e-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.676391 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnw67\" (UniqueName: \"kubernetes.io/projected/5cf30925-0355-42db-9895-f23a97fca08e-kube-api-access-nnw67\") pod \"openstack-cell1-galera-0\" (UID: \"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.723192 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"5cf30925-0355-42db-9895-f23a97fca08e\") " pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.776178 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.827500 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.828549 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.835247 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.835492 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-w2d7j" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.835526 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.879451 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.933507 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc7b5e62-4146-4417-81e6-1ca9633eafe9-kolla-config\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.933553 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7b5e62-4146-4417-81e6-1ca9633eafe9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.933589 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc7b5e62-4146-4417-81e6-1ca9633eafe9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.933658 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7c5s\" (UniqueName: \"kubernetes.io/projected/fc7b5e62-4146-4417-81e6-1ca9633eafe9-kube-api-access-d7c5s\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.933680 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc7b5e62-4146-4417-81e6-1ca9633eafe9-config-data\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:30 crc kubenswrapper[4865]: I0123 12:08:30.960000 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.035959 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7c5s\" (UniqueName: \"kubernetes.io/projected/fc7b5e62-4146-4417-81e6-1ca9633eafe9-kube-api-access-d7c5s\") pod 
\"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.037303 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc7b5e62-4146-4417-81e6-1ca9633eafe9-config-data\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.037376 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fc7b5e62-4146-4417-81e6-1ca9633eafe9-config-data\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.037497 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc7b5e62-4146-4417-81e6-1ca9633eafe9-kolla-config\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.037519 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7b5e62-4146-4417-81e6-1ca9633eafe9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.037568 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc7b5e62-4146-4417-81e6-1ca9633eafe9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.038336 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc7b5e62-4146-4417-81e6-1ca9633eafe9-kolla-config\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.045838 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc7b5e62-4146-4417-81e6-1ca9633eafe9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.051315 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7b5e62-4146-4417-81e6-1ca9633eafe9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.055790 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7c5s\" (UniqueName: \"kubernetes.io/projected/fc7b5e62-4146-4417-81e6-1ca9633eafe9-kube-api-access-d7c5s\") pod \"memcached-0\" (UID: \"fc7b5e62-4146-4417-81e6-1ca9633eafe9\") " pod="openstack/memcached-0" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.165042 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.447876 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"78884295-a3de-4e00-bcc4-6a1627b50717","Type":"ContainerStarted","Data":"213e75bf5abe35c4c1ab697e990ca8ad3abd3e4be41b0fde4328aa68aabf1ee4"} Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.456016 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.457672 4865 generic.go:334] "Generic (PLEG): container finished" podID="a23e4222-f3ad-410f-bba7-01180f7691e0" containerID="e40f2e2b5803f4ba7d93de9648f707475af218e7a516dbb9f68feb391e7547af" exitCode=0 Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.457718 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg8g6" event={"ID":"a23e4222-f3ad-410f-bba7-01180f7691e0","Type":"ContainerDied","Data":"e40f2e2b5803f4ba7d93de9648f707475af218e7a516dbb9f68feb391e7547af"} Jan 23 12:08:31 crc kubenswrapper[4865]: W0123 12:08:31.485193 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cf30925_0355_42db_9895_f23a97fca08e.slice/crio-84d8fce9cab0755d68c3a3d5ee0e8a3dc2af42de2960489b4933dc065ac42db3 WatchSource:0}: Error finding container 84d8fce9cab0755d68c3a3d5ee0e8a3dc2af42de2960489b4933dc065ac42db3: Status 404 returned error can't find the container with id 84d8fce9cab0755d68c3a3d5ee0e8a3dc2af42de2960489b4933dc065ac42db3 Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.711492 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 23 12:08:31 crc kubenswrapper[4865]: W0123 12:08:31.720827 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc7b5e62_4146_4417_81e6_1ca9633eafe9.slice/crio-c99cea9225dfc1cbb3241d15757da79d0da9848db2805b8d81d570117cd4d298 WatchSource:0}: Error finding container c99cea9225dfc1cbb3241d15757da79d0da9848db2805b8d81d570117cd4d298: Status 404 returned error can't find the container with id c99cea9225dfc1cbb3241d15757da79d0da9848db2805b8d81d570117cd4d298 Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.800879 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.855160 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23e4222-f3ad-410f-bba7-01180f7691e0-catalog-content\") pod \"a23e4222-f3ad-410f-bba7-01180f7691e0\" (UID: \"a23e4222-f3ad-410f-bba7-01180f7691e0\") " Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.855231 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l46w9\" (UniqueName: \"kubernetes.io/projected/a23e4222-f3ad-410f-bba7-01180f7691e0-kube-api-access-l46w9\") pod \"a23e4222-f3ad-410f-bba7-01180f7691e0\" (UID: \"a23e4222-f3ad-410f-bba7-01180f7691e0\") " Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.855398 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23e4222-f3ad-410f-bba7-01180f7691e0-utilities\") pod \"a23e4222-f3ad-410f-bba7-01180f7691e0\" (UID: \"a23e4222-f3ad-410f-bba7-01180f7691e0\") " Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.856584 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a23e4222-f3ad-410f-bba7-01180f7691e0-utilities" (OuterVolumeSpecName: "utilities") pod "a23e4222-f3ad-410f-bba7-01180f7691e0" (UID: "a23e4222-f3ad-410f-bba7-01180f7691e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.871839 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a23e4222-f3ad-410f-bba7-01180f7691e0-kube-api-access-l46w9" (OuterVolumeSpecName: "kube-api-access-l46w9") pod "a23e4222-f3ad-410f-bba7-01180f7691e0" (UID: "a23e4222-f3ad-410f-bba7-01180f7691e0"). InnerVolumeSpecName "kube-api-access-l46w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.922454 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a23e4222-f3ad-410f-bba7-01180f7691e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a23e4222-f3ad-410f-bba7-01180f7691e0" (UID: "a23e4222-f3ad-410f-bba7-01180f7691e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.957452 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l46w9\" (UniqueName: \"kubernetes.io/projected/a23e4222-f3ad-410f-bba7-01180f7691e0-kube-api-access-l46w9\") on node \"crc\" DevicePath \"\"" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.957489 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a23e4222-f3ad-410f-bba7-01180f7691e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:08:31 crc kubenswrapper[4865]: I0123 12:08:31.957500 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a23e4222-f3ad-410f-bba7-01180f7691e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.490155 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5cf30925-0355-42db-9895-f23a97fca08e","Type":"ContainerStarted","Data":"84d8fce9cab0755d68c3a3d5ee0e8a3dc2af42de2960489b4933dc065ac42db3"} Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.495887 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tg8g6" event={"ID":"a23e4222-f3ad-410f-bba7-01180f7691e0","Type":"ContainerDied","Data":"d4cfed7c9dbd33ddffd9f460c3419471e22fe3c08423da82d1741e788443d5d0"} Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.495903 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tg8g6" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.495937 4865 scope.go:117] "RemoveContainer" containerID="e40f2e2b5803f4ba7d93de9648f707475af218e7a516dbb9f68feb391e7547af" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.503673 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fc7b5e62-4146-4417-81e6-1ca9633eafe9","Type":"ContainerStarted","Data":"c99cea9225dfc1cbb3241d15757da79d0da9848db2805b8d81d570117cd4d298"} Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.586894 4865 scope.go:117] "RemoveContainer" containerID="d3b32386f7464c699d33c9f2f779e19259a9665be9fefed04ef0f6f74796430b" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.588783 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tg8g6"] Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.604323 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tg8g6"] Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.667736 4865 scope.go:117] "RemoveContainer" containerID="ff63d57bdfbe1058643ed26b6d840609779bd665c3191d305e39a8597bfc0ae1" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.850327 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 12:08:32 crc kubenswrapper[4865]: E0123 12:08:32.850700 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23e4222-f3ad-410f-bba7-01180f7691e0" containerName="extract-content" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.850713 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23e4222-f3ad-410f-bba7-01180f7691e0" containerName="extract-content" Jan 23 12:08:32 crc kubenswrapper[4865]: E0123 12:08:32.850727 4865 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a23e4222-f3ad-410f-bba7-01180f7691e0" containerName="extract-utilities" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.850733 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23e4222-f3ad-410f-bba7-01180f7691e0" containerName="extract-utilities" Jan 23 12:08:32 crc kubenswrapper[4865]: E0123 12:08:32.850746 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a23e4222-f3ad-410f-bba7-01180f7691e0" containerName="registry-server" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.850751 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a23e4222-f3ad-410f-bba7-01180f7691e0" containerName="registry-server" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.850882 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="a23e4222-f3ad-410f-bba7-01180f7691e0" containerName="registry-server" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.851350 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.862648 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-nwb79" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.872939 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4k6\" (UniqueName: \"kubernetes.io/projected/813e1f0e-32d5-4237-8722-440164262885-kube-api-access-9v4k6\") pod \"kube-state-metrics-0\" (UID: \"813e1f0e-32d5-4237-8722-440164262885\") " pod="openstack/kube-state-metrics-0" Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.879926 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 12:08:32 crc kubenswrapper[4865]: I0123 12:08:32.974128 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v4k6\" (UniqueName: \"kubernetes.io/projected/813e1f0e-32d5-4237-8722-440164262885-kube-api-access-9v4k6\") pod \"kube-state-metrics-0\" (UID: \"813e1f0e-32d5-4237-8722-440164262885\") " pod="openstack/kube-state-metrics-0" Jan 23 12:08:33 crc kubenswrapper[4865]: I0123 12:08:33.011628 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v4k6\" (UniqueName: \"kubernetes.io/projected/813e1f0e-32d5-4237-8722-440164262885-kube-api-access-9v4k6\") pod \"kube-state-metrics-0\" (UID: \"813e1f0e-32d5-4237-8722-440164262885\") " pod="openstack/kube-state-metrics-0" Jan 23 12:08:33 crc kubenswrapper[4865]: I0123 12:08:33.172776 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 12:08:33 crc kubenswrapper[4865]: I0123 12:08:33.755782 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 12:08:33 crc kubenswrapper[4865]: W0123 12:08:33.775577 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod813e1f0e_32d5_4237_8722_440164262885.slice/crio-d8f2dab6e275d8d8af18a8c3788f817f71980945ff6700eb4b89cae411f7669e WatchSource:0}: Error finding container d8f2dab6e275d8d8af18a8c3788f817f71980945ff6700eb4b89cae411f7669e: Status 404 returned error can't find the container with id d8f2dab6e275d8d8af18a8c3788f817f71980945ff6700eb4b89cae411f7669e Jan 23 12:08:34 crc kubenswrapper[4865]: I0123 12:08:34.141722 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a23e4222-f3ad-410f-bba7-01180f7691e0" path="/var/lib/kubelet/pods/a23e4222-f3ad-410f-bba7-01180f7691e0/volumes" Jan 23 12:08:34 crc kubenswrapper[4865]: I0123 12:08:34.524570 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"813e1f0e-32d5-4237-8722-440164262885","Type":"ContainerStarted","Data":"d8f2dab6e275d8d8af18a8c3788f817f71980945ff6700eb4b89cae411f7669e"} Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.104394 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hw472"] Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.106157 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.148919 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba9656c-fa5c-4675-a566-9ea549f1e553-catalog-content\") pod \"redhat-marketplace-hw472\" (UID: \"bba9656c-fa5c-4675-a566-9ea549f1e553\") " pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.148975 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba9656c-fa5c-4675-a566-9ea549f1e553-utilities\") pod \"redhat-marketplace-hw472\" (UID: \"bba9656c-fa5c-4675-a566-9ea549f1e553\") " pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.149037 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx8pb\" (UniqueName: \"kubernetes.io/projected/bba9656c-fa5c-4675-a566-9ea549f1e553-kube-api-access-lx8pb\") pod \"redhat-marketplace-hw472\" (UID: \"bba9656c-fa5c-4675-a566-9ea549f1e553\") " pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.157506 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hw472"] Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.262617 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba9656c-fa5c-4675-a566-9ea549f1e553-catalog-content\") pod \"redhat-marketplace-hw472\" (UID: \"bba9656c-fa5c-4675-a566-9ea549f1e553\") " pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.262678 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba9656c-fa5c-4675-a566-9ea549f1e553-utilities\") pod \"redhat-marketplace-hw472\" (UID: \"bba9656c-fa5c-4675-a566-9ea549f1e553\") " pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.262766 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx8pb\" (UniqueName: \"kubernetes.io/projected/bba9656c-fa5c-4675-a566-9ea549f1e553-kube-api-access-lx8pb\") pod \"redhat-marketplace-hw472\" (UID: \"bba9656c-fa5c-4675-a566-9ea549f1e553\") " pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.264828 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba9656c-fa5c-4675-a566-9ea549f1e553-catalog-content\") pod \"redhat-marketplace-hw472\" (UID: \"bba9656c-fa5c-4675-a566-9ea549f1e553\") " pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.265258 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba9656c-fa5c-4675-a566-9ea549f1e553-utilities\") pod \"redhat-marketplace-hw472\" (UID: \"bba9656c-fa5c-4675-a566-9ea549f1e553\") " pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.291142 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx8pb\" (UniqueName: \"kubernetes.io/projected/bba9656c-fa5c-4675-a566-9ea549f1e553-kube-api-access-lx8pb\") pod \"redhat-marketplace-hw472\" (UID: \"bba9656c-fa5c-4675-a566-9ea549f1e553\") " pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.475826 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.684544 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hz4vm"] Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.688913 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.694745 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.695218 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4kkf6" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.695386 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.733474 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hz4vm"] Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.833617 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-spv64"] Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.835021 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.863350 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-spv64"] Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.870546 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-scripts\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.870640 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-ovn-controller-tls-certs\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.870668 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-var-log-ovn\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.870689 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-var-run\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.870727 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfcdf\" (UniqueName: \"kubernetes.io/projected/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-kube-api-access-mfcdf\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.870747 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-combined-ca-bundle\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.870764 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-var-run-ovn\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972656 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-scripts\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972726 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4slhx\" (UniqueName: 
\"kubernetes.io/projected/11be7549-5b2b-49e9-b11e-7035922b3673-kube-api-access-4slhx\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972751 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-ovn-controller-tls-certs\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972771 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-var-log-ovn\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972791 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-var-run\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972812 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/11be7549-5b2b-49e9-b11e-7035922b3673-var-lib\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972834 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/11be7549-5b2b-49e9-b11e-7035922b3673-var-log\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972855 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11be7549-5b2b-49e9-b11e-7035922b3673-scripts\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972878 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfcdf\" (UniqueName: \"kubernetes.io/projected/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-kube-api-access-mfcdf\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972899 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/11be7549-5b2b-49e9-b11e-7035922b3673-var-run\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972917 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-var-run-ovn\") pod \"ovn-controller-hz4vm\" (UID: 
\"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972934 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-combined-ca-bundle\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.972951 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/11be7549-5b2b-49e9-b11e-7035922b3673-etc-ovs\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.973772 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-var-log-ovn\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.974656 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-var-run\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.974759 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-var-run-ovn\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.974866 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-scripts\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.978344 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-ovn-controller-tls-certs\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:36 crc kubenswrapper[4865]: I0123 12:08:36.979139 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-combined-ca-bundle\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.007927 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfcdf\" (UniqueName: \"kubernetes.io/projected/d8331842-a45a-4cbf-a55b-0d8dde7f69eb-kube-api-access-mfcdf\") pod \"ovn-controller-hz4vm\" (UID: \"d8331842-a45a-4cbf-a55b-0d8dde7f69eb\") " pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.073793 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4slhx\" (UniqueName: \"kubernetes.io/projected/11be7549-5b2b-49e9-b11e-7035922b3673-kube-api-access-4slhx\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.073859 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/11be7549-5b2b-49e9-b11e-7035922b3673-var-lib\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.073881 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/11be7549-5b2b-49e9-b11e-7035922b3673-var-log\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.073905 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11be7549-5b2b-49e9-b11e-7035922b3673-scripts\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.073934 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/11be7549-5b2b-49e9-b11e-7035922b3673-var-run\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.073956 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/11be7549-5b2b-49e9-b11e-7035922b3673-etc-ovs\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.074147 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/11be7549-5b2b-49e9-b11e-7035922b3673-var-lib\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.074210 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/11be7549-5b2b-49e9-b11e-7035922b3673-etc-ovs\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.074301 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/11be7549-5b2b-49e9-b11e-7035922b3673-var-log\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.074498 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/11be7549-5b2b-49e9-b11e-7035922b3673-var-run\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc 
kubenswrapper[4865]: I0123 12:08:37.076152 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/11be7549-5b2b-49e9-b11e-7035922b3673-scripts\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.106507 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4slhx\" (UniqueName: \"kubernetes.io/projected/11be7549-5b2b-49e9-b11e-7035922b3673-kube-api-access-4slhx\") pod \"ovn-controller-ovs-spv64\" (UID: \"11be7549-5b2b-49e9-b11e-7035922b3673\") " pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.159112 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.303148 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hz4vm" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.470913 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.473471 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.478114 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.478534 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.478868 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.484629 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.489473 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-fdj7x" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.492804 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.582098 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00de89b6-ad6a-46bb-ac17-6be026277b26-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.582258 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00de89b6-ad6a-46bb-ac17-6be026277b26-config\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.582341 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00de89b6-ad6a-46bb-ac17-6be026277b26-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc 
kubenswrapper[4865]: I0123 12:08:37.582494 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00de89b6-ad6a-46bb-ac17-6be026277b26-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.582569 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptcmj\" (UniqueName: \"kubernetes.io/projected/00de89b6-ad6a-46bb-ac17-6be026277b26-kube-api-access-ptcmj\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.582593 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00de89b6-ad6a-46bb-ac17-6be026277b26-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.582642 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00de89b6-ad6a-46bb-ac17-6be026277b26-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.582664 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.684217 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00de89b6-ad6a-46bb-ac17-6be026277b26-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.684281 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptcmj\" (UniqueName: \"kubernetes.io/projected/00de89b6-ad6a-46bb-ac17-6be026277b26-kube-api-access-ptcmj\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.684302 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00de89b6-ad6a-46bb-ac17-6be026277b26-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.684322 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00de89b6-ad6a-46bb-ac17-6be026277b26-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.684340 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.684384 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00de89b6-ad6a-46bb-ac17-6be026277b26-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.684417 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00de89b6-ad6a-46bb-ac17-6be026277b26-config\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.684438 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00de89b6-ad6a-46bb-ac17-6be026277b26-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.685679 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.685741 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/00de89b6-ad6a-46bb-ac17-6be026277b26-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.686134 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00de89b6-ad6a-46bb-ac17-6be026277b26-config\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.690860 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00de89b6-ad6a-46bb-ac17-6be026277b26-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.694971 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00de89b6-ad6a-46bb-ac17-6be026277b26-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.695873 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00de89b6-ad6a-46bb-ac17-6be026277b26-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.702083 
4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00de89b6-ad6a-46bb-ac17-6be026277b26-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.703429 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptcmj\" (UniqueName: \"kubernetes.io/projected/00de89b6-ad6a-46bb-ac17-6be026277b26-kube-api-access-ptcmj\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.734873 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"00de89b6-ad6a-46bb-ac17-6be026277b26\") " pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:37 crc kubenswrapper[4865]: I0123 12:08:37.797433 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.478215 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2d8q5"] Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.479581 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.492889 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2d8q5"] Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.602016 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-utilities\") pod \"redhat-operators-2d8q5\" (UID: \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\") " pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.602094 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jg2j\" (UniqueName: \"kubernetes.io/projected/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-kube-api-access-4jg2j\") pod \"redhat-operators-2d8q5\" (UID: \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\") " pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.602159 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-catalog-content\") pod \"redhat-operators-2d8q5\" (UID: \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\") " pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.704149 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jg2j\" (UniqueName: \"kubernetes.io/projected/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-kube-api-access-4jg2j\") pod \"redhat-operators-2d8q5\" (UID: \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\") " pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.704674 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-catalog-content\") pod \"redhat-operators-2d8q5\" (UID: \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\") " pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.704987 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-utilities\") pod \"redhat-operators-2d8q5\" (UID: \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\") " pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.705884 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-utilities\") pod \"redhat-operators-2d8q5\" (UID: \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\") " pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.706456 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-catalog-content\") pod \"redhat-operators-2d8q5\" (UID: \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\") " pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.753761 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jg2j\" (UniqueName: \"kubernetes.io/projected/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-kube-api-access-4jg2j\") pod \"redhat-operators-2d8q5\" (UID: \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\") " pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:08:38 crc kubenswrapper[4865]: I0123 12:08:38.807703 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:08:39 crc kubenswrapper[4865]: I0123 12:08:39.973944 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 12:08:39 crc kubenswrapper[4865]: I0123 12:08:39.975072 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:39 crc kubenswrapper[4865]: I0123 12:08:39.977881 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 23 12:08:39 crc kubenswrapper[4865]: I0123 12:08:39.978058 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xl95m" Jan 23 12:08:39 crc kubenswrapper[4865]: I0123 12:08:39.978158 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 23 12:08:39 crc kubenswrapper[4865]: I0123 12:08:39.978308 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.009678 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.128265 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vlnv\" (UniqueName: \"kubernetes.io/projected/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-kube-api-access-7vlnv\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.128319 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.128352 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.128377 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.128411 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-config\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.128447 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.128472 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.128490 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.229755 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.229811 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.229848 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-config\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.229887 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.229906 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.229926 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.229987 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vlnv\" (UniqueName: \"kubernetes.io/projected/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-kube-api-access-7vlnv\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.230004 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.230776 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.231401 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.233329 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-config\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.233442 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.241953 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.256806 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.260250 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.270761 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.274779 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vlnv\" (UniqueName: \"kubernetes.io/projected/87116cbc-3d44-4cde-a97e-ce1fe81f2cb8-kube-api-access-7vlnv\") pod \"ovsdbserver-sb-0\" (UID: \"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8\") " pod="openstack/ovsdbserver-sb-0" Jan 23 12:08:40 crc kubenswrapper[4865]: I0123 12:08:40.292002 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 23 12:09:08 crc kubenswrapper[4865]: E0123 12:09:08.936357 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-rabbitmq:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:08 crc kubenswrapper[4865]: E0123 12:09:08.936952 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-rabbitmq:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:08 crc kubenswrapper[4865]: E0123 12:09:08.937085 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-rabbitmq:c3923531bcda0b0811b2d5053f189beb,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqtwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(10a07490-f361-43e5-8d3e-a8bd917b3b84): ErrImagePull: rpc error: code = Canceled desc = copying 
config: context canceled" logger="UnhandledError" Jan 23 12:09:08 crc kubenswrapper[4865]: E0123 12:09:08.938234 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="10a07490-f361-43e5-8d3e-a8bd917b3b84" Jan 23 12:09:09 crc kubenswrapper[4865]: E0123 12:09:09.806428 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-rabbitmq:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/rabbitmq-server-0" podUID="10a07490-f361-43e5-8d3e-a8bd917b3b84" Jan 23 12:09:11 crc kubenswrapper[4865]: E0123 12:09:11.356127 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:11 crc kubenswrapper[4865]: E0123 12:09:11.356770 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:11 crc kubenswrapper[4865]: E0123 12:09:11.356939 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnw67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Contain
erResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(5cf30925-0355-42db-9895-f23a97fca08e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:09:11 crc kubenswrapper[4865]: E0123 12:09:11.358193 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" Jan 23 12:09:11 crc kubenswrapper[4865]: E0123 12:09:11.823635 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" Jan 23 12:09:11 crc kubenswrapper[4865]: E0123 12:09:11.889010 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-rabbitmq:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:11 crc kubenswrapper[4865]: E0123 12:09:11.889078 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-rabbitmq:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:11 crc kubenswrapper[4865]: E0123 12:09:11.889226 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-rabbitmq:c3923531bcda0b0811b2d5053f189beb,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9p8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(ebb7983c-3aed-42f5-8635-8188f7abb9d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:09:11 crc kubenswrapper[4865]: E0123 12:09:11.891878 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" Jan 23 12:09:12 crc kubenswrapper[4865]: I0123 12:09:12.112226 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2d8q5"] Jan 23 12:09:12 crc kubenswrapper[4865]: E0123 12:09:12.832702 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-rabbitmq:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" Jan 23 12:09:14 crc kubenswrapper[4865]: E0123 12:09:14.151043 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:14 crc kubenswrapper[4865]: E0123 12:09:14.151464 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:14 crc 
kubenswrapper[4865]: E0123 12:09:14.151823 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zb2q4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-54d4b677c7-vrhzp_openstack(92d92cd0-2f42-47e4-9b28-d3c3b596f818): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:09:14 crc kubenswrapper[4865]: E0123 12:09:14.153088 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" podUID="92d92cd0-2f42-47e4-9b28-d3c3b596f818" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.867581 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-memcached:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.867931 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-memcached:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.868075 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:memcached,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-memcached:c3923531bcda0b0811b2d5053f189beb,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5bdh687h66bh5f7h89hf4h58fh556h667h575h659h5fbh5f4h68ch55ch7bh56dh58h5ddh7fh88h7dh649h5ddh55ch587h699h677h595h59chf8h67fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d7c5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(fc7b5e62-4146-4417-81e6-1ca9633eafe9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.869258 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context 
canceled\"" pod="openstack/memcached-0" podUID="fc7b5e62-4146-4417-81e6-1ca9633eafe9" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.887004 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.887160 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.887476 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4wjwg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-d448fc49f-nkrhv_openstack(45fe2611-a28b-49e2-9c01-b50eebc448dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.888806 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" podUID="45fe2611-a28b-49e2-9c01-b50eebc448dd" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.893533 4865 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.893589 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.893761 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j5xzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(78884295-a3de-4e00-bcc4-6a1627b50717): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.895460 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.936123 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Jan 23 
12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.936216 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.936393 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h27k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-dc8ff8c5-mzpvc_openstack(0b3b2411-8af8-487c-8906-f7b035e74ae0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.937503 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" podUID="0b3b2411-8af8-487c-8906-f7b035e74ae0" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.983863 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.983918 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 23 12:09:15 crc 
kubenswrapper[4865]: E0123 12:09:15.984033 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9v4k6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(813e1f0e-32d5-4237-8722-440164262885): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Jan 23 12:09:15 crc kubenswrapper[4865]: E0123 12:09:15.985802 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="813e1f0e-32d5-4237-8722-440164262885" Jan 23 12:09:16 crc kubenswrapper[4865]: E0123 12:09:16.012200 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:16 crc kubenswrapper[4865]: E0123 12:09:16.012278 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:16 crc kubenswrapper[4865]: E0123 12:09:16.012439 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rhggg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78cf7dc6df-xpjmc_openstack(15c39433-1f67-4c52-9a8f-df981b9af880): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:09:16 crc kubenswrapper[4865]: E0123 12:09:16.014160 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" podUID="15c39433-1f67-4c52-9a8f-df981b9af880" Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.026536 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.142516 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d92cd0-2f42-47e4-9b28-d3c3b596f818-config\") pod \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\" (UID: \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\") " Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.142699 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d92cd0-2f42-47e4-9b28-d3c3b596f818-dns-svc\") pod \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\" (UID: \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\") " Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.148070 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d92cd0-2f42-47e4-9b28-d3c3b596f818-config" (OuterVolumeSpecName: "config") pod "92d92cd0-2f42-47e4-9b28-d3c3b596f818" (UID: "92d92cd0-2f42-47e4-9b28-d3c3b596f818"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.150036 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92d92cd0-2f42-47e4-9b28-d3c3b596f818-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92d92cd0-2f42-47e4-9b28-d3c3b596f818" (UID: "92d92cd0-2f42-47e4-9b28-d3c3b596f818"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.245835 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb2q4\" (UniqueName: \"kubernetes.io/projected/92d92cd0-2f42-47e4-9b28-d3c3b596f818-kube-api-access-zb2q4\") pod \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\" (UID: \"92d92cd0-2f42-47e4-9b28-d3c3b596f818\") " Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.246354 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92d92cd0-2f42-47e4-9b28-d3c3b596f818-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.246373 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92d92cd0-2f42-47e4-9b28-d3c3b596f818-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.282689 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d92cd0-2f42-47e4-9b28-d3c3b596f818-kube-api-access-zb2q4" (OuterVolumeSpecName: "kube-api-access-zb2q4") pod "92d92cd0-2f42-47e4-9b28-d3c3b596f818" (UID: "92d92cd0-2f42-47e4-9b28-d3c3b596f818"). InnerVolumeSpecName "kube-api-access-zb2q4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.347489 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb2q4\" (UniqueName: \"kubernetes.io/projected/92d92cd0-2f42-47e4-9b28-d3c3b596f818-kube-api-access-zb2q4\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.656403 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hz4vm"] Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.777888 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.792564 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hw472"] Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.843685 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-spv64"] Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.885518 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-spv64" event={"ID":"11be7549-5b2b-49e9-b11e-7035922b3673","Type":"ContainerStarted","Data":"63e5f06a9524dceb8e5e5555b2de4fc1c614e551b4c24882bd4ddf60c7372a3f"} Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.890582 4865 generic.go:334] "Generic (PLEG): container finished" podID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" containerID="2d161e3ffc70e3ec9fcac0eb04b111f660a2b703b05cc19d756fad7895bcab0a" exitCode=0 Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.890818 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d8q5" event={"ID":"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e","Type":"ContainerDied","Data":"2d161e3ffc70e3ec9fcac0eb04b111f660a2b703b05cc19d756fad7895bcab0a"} Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.890877 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d8q5" event={"ID":"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e","Type":"ContainerStarted","Data":"4a07fec7163d70e698a0cff0fa278473085a61a26197d3aaaa4e48e07915fb98"} Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.893480 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hz4vm" event={"ID":"d8331842-a45a-4cbf-a55b-0d8dde7f69eb","Type":"ContainerStarted","Data":"3aa9b841485849f731239f6ae70f042127a7d8b60f50c5dd5ae86793121fe76d"} Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.897756 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" event={"ID":"92d92cd0-2f42-47e4-9b28-d3c3b596f818","Type":"ContainerDied","Data":"b732894e7f294dae4f775f2b17b54ea5ec7a41484ddfd89d1a5d86e48f7c3bc8"} Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.897773 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54d4b677c7-vrhzp" Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.905396 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hw472" event={"ID":"bba9656c-fa5c-4675-a566-9ea549f1e553","Type":"ContainerStarted","Data":"77d511b2012162ae7a8601538990d38b64f1daeda018c87b7af061d2c39fb442"} Jan 23 12:09:16 crc kubenswrapper[4865]: I0123 12:09:16.925292 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8","Type":"ContainerStarted","Data":"faf657a776e029bf1d49c526d97b5a467bb97736eacb062f87e7df000fcf3c7b"} Jan 23 12:09:16 crc kubenswrapper[4865]: E0123 12:09:16.929297 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" podUID="15c39433-1f67-4c52-9a8f-df981b9af880" Jan 23 12:09:16 crc kubenswrapper[4865]: E0123 12:09:16.929366 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-neutron-server:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" podUID="45fe2611-a28b-49e2-9c01-b50eebc448dd" Jan 23 12:09:16 crc kubenswrapper[4865]: E0123 12:09:16.929366 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="813e1f0e-32d5-4237-8722-440164262885" Jan 23 12:09:16 crc kubenswrapper[4865]: E0123 12:09:16.929461 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-mariadb:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" Jan 23 12:09:16 crc kubenswrapper[4865]: E0123 12:09:16.930360 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-memcached:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/memcached-0" podUID="fc7b5e62-4146-4417-81e6-1ca9633eafe9" Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.121143 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54d4b677c7-vrhzp"] Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.127591 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54d4b677c7-vrhzp"] Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.368470 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.475035 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h27k8\" (UniqueName: \"kubernetes.io/projected/0b3b2411-8af8-487c-8906-f7b035e74ae0-kube-api-access-h27k8\") pod \"0b3b2411-8af8-487c-8906-f7b035e74ae0\" (UID: \"0b3b2411-8af8-487c-8906-f7b035e74ae0\") " Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.475166 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b3b2411-8af8-487c-8906-f7b035e74ae0-config\") pod \"0b3b2411-8af8-487c-8906-f7b035e74ae0\" (UID: \"0b3b2411-8af8-487c-8906-f7b035e74ae0\") " Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.477487 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b3b2411-8af8-487c-8906-f7b035e74ae0-config" (OuterVolumeSpecName: "config") pod "0b3b2411-8af8-487c-8906-f7b035e74ae0" (UID: "0b3b2411-8af8-487c-8906-f7b035e74ae0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.487698 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3b2411-8af8-487c-8906-f7b035e74ae0-kube-api-access-h27k8" (OuterVolumeSpecName: "kube-api-access-h27k8") pod "0b3b2411-8af8-487c-8906-f7b035e74ae0" (UID: "0b3b2411-8af8-487c-8906-f7b035e74ae0"). InnerVolumeSpecName "kube-api-access-h27k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.580056 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h27k8\" (UniqueName: \"kubernetes.io/projected/0b3b2411-8af8-487c-8906-f7b035e74ae0-kube-api-access-h27k8\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.580312 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b3b2411-8af8-487c-8906-f7b035e74ae0-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.622828 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 23 12:09:17 crc kubenswrapper[4865]: W0123 12:09:17.629113 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00de89b6_ad6a_46bb_ac17_6be026277b26.slice/crio-53bf5a0bfd388020013205dd928b5d02f086eebc2e8546b73fd5591b09dededd WatchSource:0}: Error finding container 53bf5a0bfd388020013205dd928b5d02f086eebc2e8546b73fd5591b09dededd: Status 404 returned error can't find the container with id 53bf5a0bfd388020013205dd928b5d02f086eebc2e8546b73fd5591b09dededd Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.935907 4865 generic.go:334] "Generic (PLEG): container finished" podID="bba9656c-fa5c-4675-a566-9ea549f1e553" containerID="65296672351ad963c6de8e54eb031f198b3983c441ff4331444c92f1a421c396" exitCode=0 Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.936413 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hw472" event={"ID":"bba9656c-fa5c-4675-a566-9ea549f1e553","Type":"ContainerDied","Data":"65296672351ad963c6de8e54eb031f198b3983c441ff4331444c92f1a421c396"} Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.937524 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" event={"ID":"0b3b2411-8af8-487c-8906-f7b035e74ae0","Type":"ContainerDied","Data":"392e703b431d08b1c5fe29de3bffa127eb2d301dde187aeb381f9e6b483d1a25"} Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.937543 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dc8ff8c5-mzpvc" Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.939372 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d8q5" event={"ID":"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e","Type":"ContainerStarted","Data":"4ef887ea4d280a2c7021150a086856249933eff3cf36167039205a48e7d4cc64"} Jan 23 12:09:17 crc kubenswrapper[4865]: I0123 12:09:17.943580 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"00de89b6-ad6a-46bb-ac17-6be026277b26","Type":"ContainerStarted","Data":"53bf5a0bfd388020013205dd928b5d02f086eebc2e8546b73fd5591b09dededd"} Jan 23 12:09:18 crc kubenswrapper[4865]: I0123 12:09:18.027665 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dc8ff8c5-mzpvc"] Jan 23 12:09:18 crc kubenswrapper[4865]: I0123 12:09:18.036711 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dc8ff8c5-mzpvc"] Jan 23 12:09:18 crc kubenswrapper[4865]: I0123 12:09:18.158392 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3b2411-8af8-487c-8906-f7b035e74ae0" path="/var/lib/kubelet/pods/0b3b2411-8af8-487c-8906-f7b035e74ae0/volumes" Jan 23 12:09:18 crc kubenswrapper[4865]: I0123 12:09:18.159179 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d92cd0-2f42-47e4-9b28-d3c3b596f818" path="/var/lib/kubelet/pods/92d92cd0-2f42-47e4-9b28-d3c3b596f818/volumes" Jan 23 12:09:18 crc kubenswrapper[4865]: I0123 12:09:18.776296 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:09:18 crc kubenswrapper[4865]: I0123 12:09:18.776367 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:09:19 crc kubenswrapper[4865]: I0123 12:09:19.957437 4865 generic.go:334] "Generic (PLEG): container finished" podID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" containerID="4ef887ea4d280a2c7021150a086856249933eff3cf36167039205a48e7d4cc64" exitCode=0 Jan 23 12:09:19 crc kubenswrapper[4865]: I0123 12:09:19.957477 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d8q5" event={"ID":"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e","Type":"ContainerDied","Data":"4ef887ea4d280a2c7021150a086856249933eff3cf36167039205a48e7d4cc64"} Jan 23 12:09:28 crc kubenswrapper[4865]: E0123 12:09:28.438562 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-base:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:28 crc kubenswrapper[4865]: E0123 12:09:28.439233 4865 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-base:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:28 crc kubenswrapper[4865]: E0123 12:09:28.439439 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-base:c3923531bcda0b0811b2d5053f189beb,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7bh58ch656h5dh58fh689h87h565hddh56bh588h5c8h66ch676h54fh9bh5d6h679h57dh85h59chddh564h79hb7hb6hcbh9dh546h574h5cbh66cq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4slhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-spv64_openstack(11be7549-5b2b-49e9-b11e-7035922b3673): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:09:28 crc kubenswrapper[4865]: E0123 12:09:28.440806 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" Jan 23 12:09:28 crc kubenswrapper[4865]: E0123 12:09:28.782839 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-controller:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:28 crc kubenswrapper[4865]: E0123 12:09:28.782892 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-controller:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:28 crc kubenswrapper[4865]: E0123 12:09:28.783027 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-controller:c3923531bcda0b0811b2d5053f189beb,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7bh58ch656h5dh58fh689h87h565hddh56bh588h5c8h66ch676h54fh9bh5d6h679h57dh85h59chddh564h79hb7hb6hcbh9dh546h574h5cbh66cq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mfcdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-hz4vm_openstack(d8331842-a45a-4cbf-a55b-0d8dde7f69eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:09:28 crc kubenswrapper[4865]: E0123 12:09:28.784243 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" Jan 23 12:09:28 crc kubenswrapper[4865]: E0123 12:09:28.971199 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-sb-db-server:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:28 crc kubenswrapper[4865]: E0123 12:09:28.971280 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-sb-db-server:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:09:28 crc kubenswrapper[4865]: E0123 12:09:28.971431 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-sb-db-server:c3923531bcda0b0811b2d5053f189beb,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n56ch599h587h57h565hd6hd4hf6h6fh64bh5c8h664hd8h85h687h5ffh5b4hd8h5cch5f5h578h7fh695h699h57dh57bh98h5dch659h5d8hcfh584q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vl
nv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(87116cbc-3d44-4cde-a97e-ce1fe81f2cb8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:09:29 crc kubenswrapper[4865]: E0123 12:09:29.040332 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-controller:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" Jan 23 12:09:29 crc kubenswrapper[4865]: E0123 12:09:29.059844 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-base:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" Jan 23 12:09:30 crc kubenswrapper[4865]: I0123 12:09:30.029204 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d8q5" event={"ID":"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e","Type":"ContainerStarted","Data":"527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667"} Jan 23 12:09:30 crc kubenswrapper[4865]: I0123 12:09:30.049178 4865 generic.go:334] "Generic (PLEG): container finished" podID="bba9656c-fa5c-4675-a566-9ea549f1e553" containerID="855645d4184838e5f2c587055307bbe75d864b8edb4b9f0d78004cef4b6b44ac" exitCode=0 Jan 23 12:09:30 crc kubenswrapper[4865]: I0123 12:09:30.049234 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hw472" 
event={"ID":"bba9656c-fa5c-4675-a566-9ea549f1e553","Type":"ContainerDied","Data":"855645d4184838e5f2c587055307bbe75d864b8edb4b9f0d78004cef4b6b44ac"} Jan 23 12:09:30 crc kubenswrapper[4865]: I0123 12:09:30.059713 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2d8q5" podStartSLOduration=39.905276979 podStartE2EDuration="52.059693251s" podCreationTimestamp="2026-01-23 12:08:38 +0000 UTC" firstStartedPulling="2026-01-23 12:09:16.892900896 +0000 UTC m=+1001.061973122" lastFinishedPulling="2026-01-23 12:09:29.047317168 +0000 UTC m=+1013.216389394" observedRunningTime="2026-01-23 12:09:30.048002944 +0000 UTC m=+1014.217075180" watchObservedRunningTime="2026-01-23 12:09:30.059693251 +0000 UTC m=+1014.228765477" Jan 23 12:09:30 crc kubenswrapper[4865]: E0123 12:09:30.281924 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15c39433_1f67_4c52_9a8f_df981b9af880.slice/crio-conmon-f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15c39433_1f67_4c52_9a8f_df981b9af880.slice/crio-f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7.scope\": RecentStats: unable to find data in memory cache]" Jan 23 12:09:31 crc kubenswrapper[4865]: I0123 12:09:31.058725 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5cf30925-0355-42db-9895-f23a97fca08e","Type":"ContainerStarted","Data":"873ff86016f31a2ec861f5265e0870aa36571f009c9c6c49afc71ee20831a5ab"} Jan 23 12:09:31 crc kubenswrapper[4865]: I0123 12:09:31.061146 4865 generic.go:334] "Generic (PLEG): container finished" podID="15c39433-1f67-4c52-9a8f-df981b9af880" containerID="f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7" exitCode=0 Jan 23 12:09:31 crc kubenswrapper[4865]: I0123 12:09:31.061187 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" event={"ID":"15c39433-1f67-4c52-9a8f-df981b9af880","Type":"ContainerDied","Data":"f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7"} Jan 23 12:09:32 crc kubenswrapper[4865]: I0123 12:09:32.069878 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"00de89b6-ad6a-46bb-ac17-6be026277b26","Type":"ContainerStarted","Data":"33cf2ebdcb708049bcb73bf5b64a7156aeb5bfca58bc9d097ce7277c01f116d8"} Jan 23 12:09:32 crc kubenswrapper[4865]: I0123 12:09:32.072365 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" event={"ID":"15c39433-1f67-4c52-9a8f-df981b9af880","Type":"ContainerStarted","Data":"f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4"} Jan 23 12:09:32 crc kubenswrapper[4865]: I0123 12:09:32.073385 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:09:32 crc kubenswrapper[4865]: I0123 12:09:32.075885 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"78884295-a3de-4e00-bcc4-6a1627b50717","Type":"ContainerStarted","Data":"1222d4ef3d8878fa911439a63d871b1e5f3216779270b77cd3d9fd29fae8b66a"} Jan 23 12:09:32 crc kubenswrapper[4865]: I0123 12:09:32.091997 4865 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" podStartSLOduration=5.140082429 podStartE2EDuration="1m6.091979333s" podCreationTimestamp="2026-01-23 12:08:26 +0000 UTC" firstStartedPulling="2026-01-23 12:08:28.252922557 +0000 UTC m=+952.421994783" lastFinishedPulling="2026-01-23 12:09:29.204819461 +0000 UTC m=+1013.373891687" observedRunningTime="2026-01-23 12:09:32.087243567 +0000 UTC m=+1016.256315793" watchObservedRunningTime="2026-01-23 12:09:32.091979333 +0000 UTC m=+1016.261051559" Jan 23 12:09:34 crc kubenswrapper[4865]: I0123 12:09:34.091568 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ebb7983c-3aed-42f5-8635-8188f7abb9d5","Type":"ContainerStarted","Data":"fe9e5ea611f2b63e2e08e8fea6bfb1afbacd1402c29a26849582d14d630918e8"} Jan 23 12:09:34 crc kubenswrapper[4865]: I0123 12:09:34.093723 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10a07490-f361-43e5-8d3e-a8bd917b3b84","Type":"ContainerStarted","Data":"5e4cca6ecb16f4a92d5899f94604c447d30034fd3d17d308d5dacb49f13a795c"} Jan 23 12:09:34 crc kubenswrapper[4865]: E0123 12:09:34.786976 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="87116cbc-3d44-4cde-a97e-ce1fe81f2cb8" Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.110970 4865 generic.go:334] "Generic (PLEG): container finished" podID="45fe2611-a28b-49e2-9c01-b50eebc448dd" containerID="76cdfe86567964e087341a85a56d67b489cb5d55c2f81957a0d82937994fd1ba" exitCode=0 Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.111038 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" event={"ID":"45fe2611-a28b-49e2-9c01-b50eebc448dd","Type":"ContainerDied","Data":"76cdfe86567964e087341a85a56d67b489cb5d55c2f81957a0d82937994fd1ba"} Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.122669 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"00de89b6-ad6a-46bb-ac17-6be026277b26","Type":"ContainerStarted","Data":"00a6316ac7efb8cc692cad1475a128574c75a3e21ddf99c5c2c2780b066e3c39"} Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.135547 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"fc7b5e62-4146-4417-81e6-1ca9633eafe9","Type":"ContainerStarted","Data":"3f121a12db222d4f8c4fd1e9992713e8bb4c40c2cbf7bb92129d06b1feba71f0"} Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.136685 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.145950 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hw472" event={"ID":"bba9656c-fa5c-4675-a566-9ea549f1e553","Type":"ContainerStarted","Data":"7e2a67d413aa797327bdf48cb8698912b2015e31f975ce2c44590ee785650e5e"} Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.149343 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8","Type":"ContainerStarted","Data":"420d6612741e2c7ed4a312d2aeaae16f5bccc63530f749cd8fadaf1f52ba906b"} Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.153115 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"813e1f0e-32d5-4237-8722-440164262885","Type":"ContainerStarted","Data":"13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279"} Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.153813 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.189323 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=42.373521925 podStartE2EDuration="59.189306592s" podCreationTimestamp="2026-01-23 12:08:36 +0000 UTC" firstStartedPulling="2026-01-23 12:09:17.634031355 +0000 UTC m=+1001.803103581" lastFinishedPulling="2026-01-23 12:09:34.449816022 +0000 UTC m=+1018.618888248" observedRunningTime="2026-01-23 12:09:35.164576536 +0000 UTC m=+1019.333648762" watchObservedRunningTime="2026-01-23 12:09:35.189306592 +0000 UTC m=+1019.358378818" Jan 23 12:09:35 crc kubenswrapper[4865]: E0123 12:09:35.201095 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-sb-db-server:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="87116cbc-3d44-4cde-a97e-ce1fe81f2cb8" Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.227011 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.564095204 podStartE2EDuration="1m3.226992196s" podCreationTimestamp="2026-01-23 12:08:32 +0000 UTC" firstStartedPulling="2026-01-23 12:08:33.781278711 +0000 UTC m=+957.950350937" lastFinishedPulling="2026-01-23 12:09:34.444175703 +0000 UTC m=+1018.613247929" observedRunningTime="2026-01-23 12:09:35.225976981 +0000 UTC m=+1019.395049207" watchObservedRunningTime="2026-01-23 12:09:35.226992196 +0000 UTC m=+1019.396064422" Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.232163 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hw472" podStartSLOduration=42.727868677000004 podStartE2EDuration="59.232147663s" podCreationTimestamp="2026-01-23 12:08:36 +0000 UTC" firstStartedPulling="2026-01-23 12:09:17.939825446 +0000 UTC m=+1002.108897672" lastFinishedPulling="2026-01-23 12:09:34.444104432 +0000 UTC m=+1018.613176658" observedRunningTime="2026-01-23 12:09:35.214004248 +0000 UTC m=+1019.383076474" watchObservedRunningTime="2026-01-23 12:09:35.232147663 +0000 UTC m=+1019.401219889" Jan 23 12:09:35 crc kubenswrapper[4865]: I0123 12:09:35.255916 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.53987318 podStartE2EDuration="1m5.255899436s" podCreationTimestamp="2026-01-23 12:08:30 +0000 UTC" firstStartedPulling="2026-01-23 12:08:31.724316033 +0000 UTC m=+955.893388259" lastFinishedPulling="2026-01-23 12:09:34.440342299 +0000 UTC m=+1018.609414515" observedRunningTime="2026-01-23 12:09:35.251465657 +0000 UTC m=+1019.420537883" watchObservedRunningTime="2026-01-23 12:09:35.255899436 +0000 UTC m=+1019.424971662" Jan 23 12:09:36 crc kubenswrapper[4865]: I0123 12:09:36.170807 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" 
event={"ID":"45fe2611-a28b-49e2-9c01-b50eebc448dd","Type":"ContainerStarted","Data":"d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4"} Jan 23 12:09:36 crc kubenswrapper[4865]: E0123 12:09:36.173146 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-ovn-sb-db-server:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="87116cbc-3d44-4cde-a97e-ce1fe81f2cb8" Jan 23 12:09:36 crc kubenswrapper[4865]: I0123 12:09:36.173510 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:09:36 crc kubenswrapper[4865]: I0123 12:09:36.215187 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" podStartSLOduration=-9223371966.639606 podStartE2EDuration="1m10.215168807s" podCreationTimestamp="2026-01-23 12:08:26 +0000 UTC" firstStartedPulling="2026-01-23 12:08:28.337706678 +0000 UTC m=+952.506778904" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:09:36.214942171 +0000 UTC m=+1020.384014397" watchObservedRunningTime="2026-01-23 12:09:36.215168807 +0000 UTC m=+1020.384241033" Jan 23 12:09:36 crc kubenswrapper[4865]: I0123 12:09:36.477174 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:09:36 crc kubenswrapper[4865]: I0123 12:09:36.478009 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:09:36 crc kubenswrapper[4865]: I0123 12:09:36.525111 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:09:36 crc kubenswrapper[4865]: I0123 12:09:36.779882 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:09:37 crc kubenswrapper[4865]: I0123 12:09:37.798661 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 23 12:09:37 crc kubenswrapper[4865]: I0123 12:09:37.799741 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 23 12:09:37 crc kubenswrapper[4865]: I0123 12:09:37.839118 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.185226 4865 generic.go:334] "Generic (PLEG): container finished" podID="5cf30925-0355-42db-9895-f23a97fca08e" containerID="873ff86016f31a2ec861f5265e0870aa36571f009c9c6c49afc71ee20831a5ab" exitCode=0 Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.185407 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5cf30925-0355-42db-9895-f23a97fca08e","Type":"ContainerDied","Data":"873ff86016f31a2ec861f5265e0870aa36571f009c9c6c49afc71ee20831a5ab"} Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.187802 4865 generic.go:334] "Generic (PLEG): container finished" podID="78884295-a3de-4e00-bcc4-6a1627b50717" containerID="1222d4ef3d8878fa911439a63d871b1e5f3216779270b77cd3d9fd29fae8b66a" exitCode=0 Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.188021 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"78884295-a3de-4e00-bcc4-6a1627b50717","Type":"ContainerDied","Data":"1222d4ef3d8878fa911439a63d871b1e5f3216779270b77cd3d9fd29fae8b66a"} Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.249410 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.535703 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d448fc49f-nkrhv"] Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.535905 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" podUID="45fe2611-a28b-49e2-9c01-b50eebc448dd" containerName="dnsmasq-dns" containerID="cri-o://d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4" gracePeriod=10 Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.584694 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7f8747cc-kbhqk"] Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.626341 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.632415 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.643071 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-v8c68"] Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.661464 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.663329 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7f8747cc-kbhqk"] Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.664861 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.672440 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-v8c68"] Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.699361 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r49rf\" (UniqueName: \"kubernetes.io/projected/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-kube-api-access-r49rf\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.699401 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-ovs-rundir\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.699432 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.699459 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-config\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.699495 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-config\") pod \"dnsmasq-dns-5d7f8747cc-kbhqk\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.699561 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-ovn-rundir\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.699592 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5llwf\" (UniqueName: \"kubernetes.io/projected/1d3103c8-528c-4190-ba73-b4954b9f998e-kube-api-access-5llwf\") pod \"dnsmasq-dns-5d7f8747cc-kbhqk\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.699662 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f8747cc-kbhqk\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.699724 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-combined-ca-bundle\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.699770 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-dns-svc\") pod \"dnsmasq-dns-5d7f8747cc-kbhqk\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.801860 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-ovn-rundir\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.801916 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5llwf\" (UniqueName: \"kubernetes.io/projected/1d3103c8-528c-4190-ba73-b4954b9f998e-kube-api-access-5llwf\") pod \"dnsmasq-dns-5d7f8747cc-kbhqk\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:38 
crc kubenswrapper[4865]: I0123 12:09:38.801940 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f8747cc-kbhqk\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.801963 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-combined-ca-bundle\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.801981 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-dns-svc\") pod \"dnsmasq-dns-5d7f8747cc-kbhqk\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.802006 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r49rf\" (UniqueName: \"kubernetes.io/projected/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-kube-api-access-r49rf\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.802024 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-ovs-rundir\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.802046 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.802070 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-config\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.802103 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-config\") pod \"dnsmasq-dns-5d7f8747cc-kbhqk\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.802997 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-config\") pod \"dnsmasq-dns-5d7f8747cc-kbhqk\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.803219 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-ovn-rundir\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.803574 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-ovs-rundir\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.804012 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f8747cc-kbhqk\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.804821 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-config\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.804991 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-dns-svc\") pod \"dnsmasq-dns-5d7f8747cc-kbhqk\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.807970 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.808256 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.808765 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.809177 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-combined-ca-bundle\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.823205 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r49rf\" (UniqueName: \"kubernetes.io/projected/d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6-kube-api-access-r49rf\") pod \"ovn-controller-metrics-v8c68\" (UID: \"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6\") " pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:38 crc kubenswrapper[4865]: I0123 12:09:38.824977 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5llwf\" (UniqueName: 
\"kubernetes.io/projected/1d3103c8-528c-4190-ba73-b4954b9f998e-kube-api-access-5llwf\") pod \"dnsmasq-dns-5d7f8747cc-kbhqk\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.001973 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.010943 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-v8c68" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.110970 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.126968 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7f8747cc-kbhqk"] Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.221338 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wjwg\" (UniqueName: \"kubernetes.io/projected/45fe2611-a28b-49e2-9c01-b50eebc448dd-kube-api-access-4wjwg\") pod \"45fe2611-a28b-49e2-9c01-b50eebc448dd\" (UID: \"45fe2611-a28b-49e2-9c01-b50eebc448dd\") " Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.221456 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fe2611-a28b-49e2-9c01-b50eebc448dd-dns-svc\") pod \"45fe2611-a28b-49e2-9c01-b50eebc448dd\" (UID: \"45fe2611-a28b-49e2-9c01-b50eebc448dd\") " Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.221475 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fe2611-a28b-49e2-9c01-b50eebc448dd-config\") pod \"45fe2611-a28b-49e2-9c01-b50eebc448dd\" (UID: \"45fe2611-a28b-49e2-9c01-b50eebc448dd\") " Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.233517 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67cf479777-78s2l"] Jan 23 12:09:39 crc kubenswrapper[4865]: E0123 12:09:39.233932 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fe2611-a28b-49e2-9c01-b50eebc448dd" containerName="dnsmasq-dns" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.233945 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fe2611-a28b-49e2-9c01-b50eebc448dd" containerName="dnsmasq-dns" Jan 23 12:09:39 crc kubenswrapper[4865]: E0123 12:09:39.233953 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fe2611-a28b-49e2-9c01-b50eebc448dd" containerName="init" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.233959 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fe2611-a28b-49e2-9c01-b50eebc448dd" containerName="init" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.234122 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="45fe2611-a28b-49e2-9c01-b50eebc448dd" containerName="dnsmasq-dns" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.234912 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.239692 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.244994 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45fe2611-a28b-49e2-9c01-b50eebc448dd-kube-api-access-4wjwg" (OuterVolumeSpecName: "kube-api-access-4wjwg") pod "45fe2611-a28b-49e2-9c01-b50eebc448dd" (UID: "45fe2611-a28b-49e2-9c01-b50eebc448dd"). InnerVolumeSpecName "kube-api-access-4wjwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.287392 4865 generic.go:334] "Generic (PLEG): container finished" podID="45fe2611-a28b-49e2-9c01-b50eebc448dd" containerID="d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4" exitCode=0 Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.287456 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" event={"ID":"45fe2611-a28b-49e2-9c01-b50eebc448dd","Type":"ContainerDied","Data":"d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4"} Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.287483 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" event={"ID":"45fe2611-a28b-49e2-9c01-b50eebc448dd","Type":"ContainerDied","Data":"b1b9eaf55c57946ddccc662f663d67ac759938bf83d10b9d11fa3b5b62e4f697"} Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.287499 4865 scope.go:117] "RemoveContainer" containerID="d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.287644 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d448fc49f-nkrhv" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.293972 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"78884295-a3de-4e00-bcc4-6a1627b50717","Type":"ContainerStarted","Data":"50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d"} Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.305707 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cf479777-78s2l"] Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.323652 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5cf30925-0355-42db-9895-f23a97fca08e","Type":"ContainerStarted","Data":"7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3"} Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.323930 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-config\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.323968 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-ovsdbserver-nb\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.324654 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54tkp\" (UniqueName: \"kubernetes.io/projected/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-kube-api-access-54tkp\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.324712 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-dns-svc\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.324741 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-ovsdbserver-sb\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.324799 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wjwg\" (UniqueName: \"kubernetes.io/projected/45fe2611-a28b-49e2-9c01-b50eebc448dd-kube-api-access-4wjwg\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.348586 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45fe2611-a28b-49e2-9c01-b50eebc448dd-config" (OuterVolumeSpecName: "config") pod "45fe2611-a28b-49e2-9c01-b50eebc448dd" (UID: "45fe2611-a28b-49e2-9c01-b50eebc448dd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.362411 4865 scope.go:117] "RemoveContainer" containerID="76cdfe86567964e087341a85a56d67b489cb5d55c2f81957a0d82937994fd1ba" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.373557 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371965.481236 podStartE2EDuration="1m11.373541063s" podCreationTimestamp="2026-01-23 12:08:28 +0000 UTC" firstStartedPulling="2026-01-23 12:08:31.006829842 +0000 UTC m=+955.175902068" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:09:39.347532945 +0000 UTC m=+1023.516605191" watchObservedRunningTime="2026-01-23 12:09:39.373541063 +0000 UTC m=+1023.542613289" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.403787 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=12.705248688 podStartE2EDuration="1m10.403772944s" podCreationTimestamp="2026-01-23 12:08:29 +0000 UTC" firstStartedPulling="2026-01-23 12:08:31.487820352 +0000 UTC m=+955.656892578" lastFinishedPulling="2026-01-23 12:09:29.186344608 +0000 UTC m=+1013.355416834" observedRunningTime="2026-01-23 12:09:39.403066718 +0000 UTC m=+1023.572138944" watchObservedRunningTime="2026-01-23 12:09:39.403772944 +0000 UTC m=+1023.572845170" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.409073 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45fe2611-a28b-49e2-9c01-b50eebc448dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45fe2611-a28b-49e2-9c01-b50eebc448dd" (UID: "45fe2611-a28b-49e2-9c01-b50eebc448dd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.425978 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54tkp\" (UniqueName: \"kubernetes.io/projected/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-kube-api-access-54tkp\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.440794 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-dns-svc\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.440892 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-ovsdbserver-sb\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.440983 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-config\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.441024 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-ovsdbserver-nb\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.441516 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45fe2611-a28b-49e2-9c01-b50eebc448dd-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.441532 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45fe2611-a28b-49e2-9c01-b50eebc448dd-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.444543 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-dns-svc\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.445723 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-ovsdbserver-sb\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.448423 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-config\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: 
\"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.450398 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54tkp\" (UniqueName: \"kubernetes.io/projected/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-kube-api-access-54tkp\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.459748 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-ovsdbserver-nb\") pod \"dnsmasq-dns-67cf479777-78s2l\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.460050 4865 scope.go:117] "RemoveContainer" containerID="d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4" Jan 23 12:09:39 crc kubenswrapper[4865]: E0123 12:09:39.461251 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4\": container with ID starting with d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4 not found: ID does not exist" containerID="d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.461275 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4"} err="failed to get container status \"d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4\": rpc error: code = NotFound desc = could not find container \"d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4\": container with ID starting with d86b32ad6d2cb1a11279f9d63a20b8d3fbf0e7ec65033ba9bfa4b5d006d323d4 not found: ID does not exist" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.461294 4865 scope.go:117] "RemoveContainer" containerID="76cdfe86567964e087341a85a56d67b489cb5d55c2f81957a0d82937994fd1ba" Jan 23 12:09:39 crc kubenswrapper[4865]: E0123 12:09:39.461589 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76cdfe86567964e087341a85a56d67b489cb5d55c2f81957a0d82937994fd1ba\": container with ID starting with 76cdfe86567964e087341a85a56d67b489cb5d55c2f81957a0d82937994fd1ba not found: ID does not exist" containerID="76cdfe86567964e087341a85a56d67b489cb5d55c2f81957a0d82937994fd1ba" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.461647 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76cdfe86567964e087341a85a56d67b489cb5d55c2f81957a0d82937994fd1ba"} err="failed to get container status \"76cdfe86567964e087341a85a56d67b489cb5d55c2f81957a0d82937994fd1ba\": rpc error: code = NotFound desc = could not find container \"76cdfe86567964e087341a85a56d67b489cb5d55c2f81957a0d82937994fd1ba\": container with ID starting with 76cdfe86567964e087341a85a56d67b489cb5d55c2f81957a0d82937994fd1ba not found: ID does not exist" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.630524 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d448fc49f-nkrhv"] Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 
12:09:39.638636 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d448fc49f-nkrhv"] Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.648512 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7f8747cc-kbhqk"] Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.728549 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.736221 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-v8c68"] Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.752034 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.752071 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 23 12:09:39 crc kubenswrapper[4865]: I0123 12:09:39.869367 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2d8q5" podUID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" containerName="registry-server" probeResult="failure" output=< Jan 23 12:09:39 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:09:39 crc kubenswrapper[4865]: > Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.128189 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45fe2611-a28b-49e2-9c01-b50eebc448dd" path="/var/lib/kubelet/pods/45fe2611-a28b-49e2-9c01-b50eebc448dd/volumes" Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.230700 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cf479777-78s2l"] Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.338741 4865 generic.go:334] "Generic (PLEG): container finished" podID="1d3103c8-528c-4190-ba73-b4954b9f998e" containerID="bf7aff84bf5e7533b4ffa156ede34588e909d5a3c57a4a78674ba11bdf7e4751" exitCode=0 Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.338816 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" event={"ID":"1d3103c8-528c-4190-ba73-b4954b9f998e","Type":"ContainerDied","Data":"bf7aff84bf5e7533b4ffa156ede34588e909d5a3c57a4a78674ba11bdf7e4751"} Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.338846 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" event={"ID":"1d3103c8-528c-4190-ba73-b4954b9f998e","Type":"ContainerStarted","Data":"13ec10a1f41894a8d2ca663779487f55423c6a92a239bcd65fa76e63bc23fc60"} Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.341174 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cf479777-78s2l" event={"ID":"aea4140c-3cae-4cb4-bd2a-a78b6ef98651","Type":"ContainerStarted","Data":"06660926a8925e2f037c5ffc1df913ef25a22de9e567617f88d170fba6208dfc"} Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.345951 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-v8c68" event={"ID":"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6","Type":"ContainerStarted","Data":"9fa76425a5cc69b0890af384f07548792d2b88a7b696a0f3d43121931ac90e65"} Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.345998 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-v8c68" 
event={"ID":"d0d46a40-6942-4f3d-a85e-71ebdfeb5eb6","Type":"ContainerStarted","Data":"96adb9ce338c1962887691c811ef45d78adff812da01584fd086622a180be84d"} Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.395690 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-v8c68" podStartSLOduration=2.395672246 podStartE2EDuration="2.395672246s" podCreationTimestamp="2026-01-23 12:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:09:40.392105659 +0000 UTC m=+1024.561177885" watchObservedRunningTime="2026-01-23 12:09:40.395672246 +0000 UTC m=+1024.564744472" Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.653806 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.697821 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5llwf\" (UniqueName: \"kubernetes.io/projected/1d3103c8-528c-4190-ba73-b4954b9f998e-kube-api-access-5llwf\") pod \"1d3103c8-528c-4190-ba73-b4954b9f998e\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.697891 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-ovsdbserver-nb\") pod \"1d3103c8-528c-4190-ba73-b4954b9f998e\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.697933 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-dns-svc\") pod \"1d3103c8-528c-4190-ba73-b4954b9f998e\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.698041 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-config\") pod \"1d3103c8-528c-4190-ba73-b4954b9f998e\" (UID: \"1d3103c8-528c-4190-ba73-b4954b9f998e\") " Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.705860 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3103c8-528c-4190-ba73-b4954b9f998e-kube-api-access-5llwf" (OuterVolumeSpecName: "kube-api-access-5llwf") pod "1d3103c8-528c-4190-ba73-b4954b9f998e" (UID: "1d3103c8-528c-4190-ba73-b4954b9f998e"). InnerVolumeSpecName "kube-api-access-5llwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.723509 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-config" (OuterVolumeSpecName: "config") pod "1d3103c8-528c-4190-ba73-b4954b9f998e" (UID: "1d3103c8-528c-4190-ba73-b4954b9f998e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.724024 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1d3103c8-528c-4190-ba73-b4954b9f998e" (UID: "1d3103c8-528c-4190-ba73-b4954b9f998e"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.724490 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1d3103c8-528c-4190-ba73-b4954b9f998e" (UID: "1d3103c8-528c-4190-ba73-b4954b9f998e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.777375 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.777436 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.799561 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.799586 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5llwf\" (UniqueName: \"kubernetes.io/projected/1d3103c8-528c-4190-ba73-b4954b9f998e-kube-api-access-5llwf\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.799610 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:40 crc kubenswrapper[4865]: I0123 12:09:40.799620 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1d3103c8-528c-4190-ba73-b4954b9f998e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:41 crc kubenswrapper[4865]: I0123 12:09:41.167790 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 23 12:09:41 crc kubenswrapper[4865]: I0123 12:09:41.362080 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" event={"ID":"1d3103c8-528c-4190-ba73-b4954b9f998e","Type":"ContainerDied","Data":"13ec10a1f41894a8d2ca663779487f55423c6a92a239bcd65fa76e63bc23fc60"} Jan 23 12:09:41 crc kubenswrapper[4865]: I0123 12:09:41.362131 4865 scope.go:117] "RemoveContainer" containerID="bf7aff84bf5e7533b4ffa156ede34588e909d5a3c57a4a78674ba11bdf7e4751" Jan 23 12:09:41 crc kubenswrapper[4865]: I0123 12:09:41.362261 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7f8747cc-kbhqk" Jan 23 12:09:41 crc kubenswrapper[4865]: I0123 12:09:41.382127 4865 generic.go:334] "Generic (PLEG): container finished" podID="aea4140c-3cae-4cb4-bd2a-a78b6ef98651" containerID="349b728b69fc9c690dab2b5b47a9b577a0029f142f86feaaf10b9c50eec40bbd" exitCode=0 Jan 23 12:09:41 crc kubenswrapper[4865]: I0123 12:09:41.383019 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cf479777-78s2l" event={"ID":"aea4140c-3cae-4cb4-bd2a-a78b6ef98651","Type":"ContainerDied","Data":"349b728b69fc9c690dab2b5b47a9b577a0029f142f86feaaf10b9c50eec40bbd"} Jan 23 12:09:41 crc kubenswrapper[4865]: I0123 12:09:41.489570 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7f8747cc-kbhqk"] Jan 23 12:09:41 crc kubenswrapper[4865]: I0123 12:09:41.541473 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7f8747cc-kbhqk"] Jan 23 12:09:42 crc kubenswrapper[4865]: I0123 12:09:42.127626 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3103c8-528c-4190-ba73-b4954b9f998e" path="/var/lib/kubelet/pods/1d3103c8-528c-4190-ba73-b4954b9f998e/volumes" Jan 23 12:09:42 crc kubenswrapper[4865]: I0123 12:09:42.390789 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hz4vm" event={"ID":"d8331842-a45a-4cbf-a55b-0d8dde7f69eb","Type":"ContainerStarted","Data":"fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8"} Jan 23 12:09:42 crc kubenswrapper[4865]: I0123 12:09:42.391958 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hz4vm" Jan 23 12:09:42 crc kubenswrapper[4865]: I0123 12:09:42.392756 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cf479777-78s2l" event={"ID":"aea4140c-3cae-4cb4-bd2a-a78b6ef98651","Type":"ContainerStarted","Data":"1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee"} Jan 23 12:09:42 crc kubenswrapper[4865]: I0123 12:09:42.393587 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:42 crc kubenswrapper[4865]: I0123 12:09:42.425643 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hz4vm" podStartSLOduration=41.875722076 podStartE2EDuration="1m6.425621912s" podCreationTimestamp="2026-01-23 12:08:36 +0000 UTC" firstStartedPulling="2026-01-23 12:09:16.663019657 +0000 UTC m=+1000.832091883" lastFinishedPulling="2026-01-23 12:09:41.212919493 +0000 UTC m=+1025.381991719" observedRunningTime="2026-01-23 12:09:42.416857477 +0000 UTC m=+1026.585929703" watchObservedRunningTime="2026-01-23 12:09:42.425621912 +0000 UTC m=+1026.594694138" Jan 23 12:09:43 crc kubenswrapper[4865]: I0123 12:09:43.146432 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67cf479777-78s2l" podStartSLOduration=4.146399712 podStartE2EDuration="4.146399712s" podCreationTimestamp="2026-01-23 12:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:09:42.44836797 +0000 UTC m=+1026.617440186" watchObservedRunningTime="2026-01-23 12:09:43.146399712 +0000 UTC m=+1027.315471938" Jan 23 12:09:43 crc kubenswrapper[4865]: I0123 12:09:43.182251 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 
12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.219830 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67cf479777-78s2l"] Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.227174 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.336807 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d5cd98cbf-9gkpm"] Jan 23 12:09:44 crc kubenswrapper[4865]: E0123 12:09:44.337320 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3103c8-528c-4190-ba73-b4954b9f998e" containerName="init" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.337365 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3103c8-528c-4190-ba73-b4954b9f998e" containerName="init" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.337613 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3103c8-528c-4190-ba73-b4954b9f998e" containerName="init" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.338792 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.366314 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d5cd98cbf-9gkpm"] Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.401429 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-ovsdbserver-nb\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.401525 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-dns-svc\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.401950 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-config\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.402103 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-ovsdbserver-sb\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.402216 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kqvv\" (UniqueName: \"kubernetes.io/projected/613af722-ab75-4bff-a8b4-e7a19792c417-kube-api-access-4kqvv\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.410741 4865 generic.go:334] "Generic (PLEG): container 
finished" podID="11be7549-5b2b-49e9-b11e-7035922b3673" containerID="a98fbec13ff5bb6fd6218f7a07181605fdf742411669d79c32f0d6899ec12701" exitCode=0 Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.410796 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-spv64" event={"ID":"11be7549-5b2b-49e9-b11e-7035922b3673","Type":"ContainerDied","Data":"a98fbec13ff5bb6fd6218f7a07181605fdf742411669d79c32f0d6899ec12701"} Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.411029 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67cf479777-78s2l" podUID="aea4140c-3cae-4cb4-bd2a-a78b6ef98651" containerName="dnsmasq-dns" containerID="cri-o://1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee" gracePeriod=10 Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.423983 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.506875 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kqvv\" (UniqueName: \"kubernetes.io/projected/613af722-ab75-4bff-a8b4-e7a19792c417-kube-api-access-4kqvv\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.506975 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-ovsdbserver-nb\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.507034 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-dns-svc\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.507111 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-config\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.507141 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-ovsdbserver-sb\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.508442 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-ovsdbserver-sb\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.509178 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-dns-svc\") pod 
\"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.510385 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-ovsdbserver-nb\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.511551 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-config\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.570412 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kqvv\" (UniqueName: \"kubernetes.io/projected/613af722-ab75-4bff-a8b4-e7a19792c417-kube-api-access-4kqvv\") pod \"dnsmasq-dns-5d5cd98cbf-9gkpm\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:44 crc kubenswrapper[4865]: I0123 12:09:44.663209 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.226297 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.343882 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54tkp\" (UniqueName: \"kubernetes.io/projected/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-kube-api-access-54tkp\") pod \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.343918 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-dns-svc\") pod \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.343948 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-config\") pod \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.344012 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-ovsdbserver-nb\") pod \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.344049 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-ovsdbserver-sb\") pod \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\" (UID: \"aea4140c-3cae-4cb4-bd2a-a78b6ef98651\") " Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.349012 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-kube-api-access-54tkp" (OuterVolumeSpecName: "kube-api-access-54tkp") pod "aea4140c-3cae-4cb4-bd2a-a78b6ef98651" (UID: "aea4140c-3cae-4cb4-bd2a-a78b6ef98651"). InnerVolumeSpecName "kube-api-access-54tkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.441615 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-config" (OuterVolumeSpecName: "config") pod "aea4140c-3cae-4cb4-bd2a-a78b6ef98651" (UID: "aea4140c-3cae-4cb4-bd2a-a78b6ef98651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.445734 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.445782 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54tkp\" (UniqueName: \"kubernetes.io/projected/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-kube-api-access-54tkp\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.452884 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aea4140c-3cae-4cb4-bd2a-a78b6ef98651" (UID: "aea4140c-3cae-4cb4-bd2a-a78b6ef98651"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.454543 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-spv64" event={"ID":"11be7549-5b2b-49e9-b11e-7035922b3673","Type":"ContainerStarted","Data":"bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c"} Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.463720 4865 generic.go:334] "Generic (PLEG): container finished" podID="aea4140c-3cae-4cb4-bd2a-a78b6ef98651" containerID="1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee" exitCode=0 Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.463760 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cf479777-78s2l" event={"ID":"aea4140c-3cae-4cb4-bd2a-a78b6ef98651","Type":"ContainerDied","Data":"1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee"} Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.463790 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cf479777-78s2l" event={"ID":"aea4140c-3cae-4cb4-bd2a-a78b6ef98651","Type":"ContainerDied","Data":"06660926a8925e2f037c5ffc1df913ef25a22de9e567617f88d170fba6208dfc"} Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.463811 4865 scope.go:117] "RemoveContainer" containerID="1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.463982 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67cf479777-78s2l" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.467176 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 23 12:09:45 crc kubenswrapper[4865]: E0123 12:09:45.467524 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea4140c-3cae-4cb4-bd2a-a78b6ef98651" containerName="dnsmasq-dns" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.467544 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea4140c-3cae-4cb4-bd2a-a78b6ef98651" containerName="dnsmasq-dns" Jan 23 12:09:45 crc kubenswrapper[4865]: E0123 12:09:45.467580 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea4140c-3cae-4cb4-bd2a-a78b6ef98651" containerName="init" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.467586 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea4140c-3cae-4cb4-bd2a-a78b6ef98651" containerName="init" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.468089 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea4140c-3cae-4cb4-bd2a-a78b6ef98651" containerName="dnsmasq-dns" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.468552 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aea4140c-3cae-4cb4-bd2a-a78b6ef98651" (UID: "aea4140c-3cae-4cb4-bd2a-a78b6ef98651"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.475679 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.477476 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aea4140c-3cae-4cb4-bd2a-a78b6ef98651" (UID: "aea4140c-3cae-4cb4-bd2a-a78b6ef98651"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.477696 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.480007 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.480253 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-m5pb4" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.480390 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.534245 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.540893 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d5cd98cbf-9gkpm"] Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.549703 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.549729 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.549738 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aea4140c-3cae-4cb4-bd2a-a78b6ef98651-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.650956 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/01f82f85-33db-4f45-97c6-84f6dd7689c8-lock\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.651238 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/01f82f85-33db-4f45-97c6-84f6dd7689c8-cache\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.653222 4865 scope.go:117] "RemoveContainer" containerID="349b728b69fc9c690dab2b5b47a9b577a0029f142f86feaaf10b9c50eec40bbd" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.654675 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.654707 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.654826 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f82f85-33db-4f45-97c6-84f6dd7689c8-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.654922 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv9dk\" (UniqueName: \"kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-kube-api-access-kv9dk\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.719063 4865 scope.go:117] "RemoveContainer" containerID="1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee" Jan 23 12:09:45 crc kubenswrapper[4865]: E0123 12:09:45.719462 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee\": container with ID starting with 1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee not found: ID does not exist" containerID="1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.719492 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee"} err="failed to get container status \"1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee\": rpc error: code = NotFound desc = could not find container \"1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee\": container with ID starting with 1bbfbaa76bd19065cb905bfce0d825fd9077c9b7dbe27ab9fa1e90f856ec21ee not found: ID does not exist" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.719513 4865 scope.go:117] "RemoveContainer" containerID="349b728b69fc9c690dab2b5b47a9b577a0029f142f86feaaf10b9c50eec40bbd" Jan 23 12:09:45 crc kubenswrapper[4865]: E0123 12:09:45.719881 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"349b728b69fc9c690dab2b5b47a9b577a0029f142f86feaaf10b9c50eec40bbd\": container with ID starting with 349b728b69fc9c690dab2b5b47a9b577a0029f142f86feaaf10b9c50eec40bbd not found: ID does not exist" containerID="349b728b69fc9c690dab2b5b47a9b577a0029f142f86feaaf10b9c50eec40bbd" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.719901 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"349b728b69fc9c690dab2b5b47a9b577a0029f142f86feaaf10b9c50eec40bbd"} err="failed to get container status \"349b728b69fc9c690dab2b5b47a9b577a0029f142f86feaaf10b9c50eec40bbd\": rpc error: code = NotFound desc = could not find container \"349b728b69fc9c690dab2b5b47a9b577a0029f142f86feaaf10b9c50eec40bbd\": container with ID starting with 349b728b69fc9c690dab2b5b47a9b577a0029f142f86feaaf10b9c50eec40bbd not found: ID does not exist" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.756768 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/01f82f85-33db-4f45-97c6-84f6dd7689c8-lock\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 
12:09:45.756814 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/01f82f85-33db-4f45-97c6-84f6dd7689c8-cache\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.756831 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.756847 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.756882 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f82f85-33db-4f45-97c6-84f6dd7689c8-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.756919 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv9dk\" (UniqueName: \"kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-kube-api-access-kv9dk\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.757197 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/01f82f85-33db-4f45-97c6-84f6dd7689c8-lock\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.757269 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/01f82f85-33db-4f45-97c6-84f6dd7689c8-cache\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: E0123 12:09:45.757288 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 12:09:45 crc kubenswrapper[4865]: E0123 12:09:45.757330 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 12:09:45 crc kubenswrapper[4865]: E0123 12:09:45.757373 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift podName:01f82f85-33db-4f45-97c6-84f6dd7689c8 nodeName:}" failed. No retries permitted until 2026-01-23 12:09:46.257357261 +0000 UTC m=+1030.426429487 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift") pod "swift-storage-0" (UID: "01f82f85-33db-4f45-97c6-84f6dd7689c8") : configmap "swift-ring-files" not found Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.757429 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.765954 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f82f85-33db-4f45-97c6-84f6dd7689c8-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.779367 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv9dk\" (UniqueName: \"kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-kube-api-access-kv9dk\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.806489 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.861211 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67cf479777-78s2l"] Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.873996 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67cf479777-78s2l"] Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.886990 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 23 12:09:45 crc kubenswrapper[4865]: I0123 12:09:45.998874 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.133449 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea4140c-3cae-4cb4-bd2a-a78b6ef98651" path="/var/lib/kubelet/pods/aea4140c-3cae-4cb4-bd2a-a78b6ef98651/volumes" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.273895 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:46 crc kubenswrapper[4865]: E0123 12:09:46.274002 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 12:09:46 crc kubenswrapper[4865]: E0123 12:09:46.274480 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 12:09:46 crc kubenswrapper[4865]: E0123 12:09:46.274530 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift 
podName:01f82f85-33db-4f45-97c6-84f6dd7689c8 nodeName:}" failed. No retries permitted until 2026-01-23 12:09:47.274509636 +0000 UTC m=+1031.443581882 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift") pod "swift-storage-0" (UID: "01f82f85-33db-4f45-97c6-84f6dd7689c8") : configmap "swift-ring-files" not found Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.331457 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-cxl62"] Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.333065 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.335413 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.335667 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.337147 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.347127 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cxl62"] Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.489579 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-scripts\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.489819 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-ring-data-devices\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.489877 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-swiftconf\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.489896 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-etc-swift\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.489969 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-dispersionconf\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.489986 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-combined-ca-bundle\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.490011 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbtvz\" (UniqueName: \"kubernetes.io/projected/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-kube-api-access-nbtvz\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.503700 4865 generic.go:334] "Generic (PLEG): container finished" podID="613af722-ab75-4bff-a8b4-e7a19792c417" containerID="8cde73ef656a69a185fb223b434b1ae44e95a5a44dcb034019be0bfcf1e7108d" exitCode=0 Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.503804 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" event={"ID":"613af722-ab75-4bff-a8b4-e7a19792c417","Type":"ContainerDied","Data":"8cde73ef656a69a185fb223b434b1ae44e95a5a44dcb034019be0bfcf1e7108d"} Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.503971 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" event={"ID":"613af722-ab75-4bff-a8b4-e7a19792c417","Type":"ContainerStarted","Data":"dbb2a02a5aa2a3879147ace6f2f2f0713c0d6eb69bf883d10a357e3e98912600"} Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.537539 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-spv64" event={"ID":"11be7549-5b2b-49e9-b11e-7035922b3673","Type":"ContainerStarted","Data":"f6ffc0f8bb728875317f8e310c556dd93bdcb18b357e4d537507f558b0e947dc"} Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.537941 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.538029 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.579830 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-spv64" podStartSLOduration=44.238915495 podStartE2EDuration="1m10.579806466s" podCreationTimestamp="2026-01-23 12:08:36 +0000 UTC" firstStartedPulling="2026-01-23 12:09:16.849509141 +0000 UTC m=+1001.018581367" lastFinishedPulling="2026-01-23 12:09:43.190400112 +0000 UTC m=+1027.359472338" observedRunningTime="2026-01-23 12:09:46.569116684 +0000 UTC m=+1030.738188910" watchObservedRunningTime="2026-01-23 12:09:46.579806466 +0000 UTC m=+1030.748878692" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.591514 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-dispersionconf\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.591557 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-combined-ca-bundle\") pod 
\"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.591613 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbtvz\" (UniqueName: \"kubernetes.io/projected/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-kube-api-access-nbtvz\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.591640 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-scripts\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.591685 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-ring-data-devices\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.591742 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-swiftconf\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.591757 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-etc-swift\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.593490 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-ring-data-devices\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.594622 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-scripts\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.594973 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-etc-swift\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.597556 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.600322 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-dispersionconf\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.602737 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-swiftconf\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.611200 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-combined-ca-bundle\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.616164 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbtvz\" (UniqueName: \"kubernetes.io/projected/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-kube-api-access-nbtvz\") pod \"swift-ring-rebalance-cxl62\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.659193 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:09:46 crc kubenswrapper[4865]: I0123 12:09:46.663849 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hw472"] Jan 23 12:09:47 crc kubenswrapper[4865]: I0123 12:09:47.101502 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-cxl62"] Jan 23 12:09:47 crc kubenswrapper[4865]: W0123 12:09:47.109659 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7e81618_9835_4b8e_a9fc_4e2506ea7ed8.slice/crio-1d0f4e37d787585c0dce1d313be75c018e8d2cc232ea6d3ecf87443e59f265d7 WatchSource:0}: Error finding container 1d0f4e37d787585c0dce1d313be75c018e8d2cc232ea6d3ecf87443e59f265d7: Status 404 returned error can't find the container with id 1d0f4e37d787585c0dce1d313be75c018e8d2cc232ea6d3ecf87443e59f265d7 Jan 23 12:09:47 crc kubenswrapper[4865]: I0123 12:09:47.305260 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:47 crc kubenswrapper[4865]: E0123 12:09:47.305661 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 12:09:47 crc kubenswrapper[4865]: E0123 12:09:47.305692 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 12:09:47 crc kubenswrapper[4865]: E0123 12:09:47.305746 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift podName:01f82f85-33db-4f45-97c6-84f6dd7689c8 nodeName:}" failed. No retries permitted until 2026-01-23 12:09:49.305727573 +0000 UTC m=+1033.474799809 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift") pod "swift-storage-0" (UID: "01f82f85-33db-4f45-97c6-84f6dd7689c8") : configmap "swift-ring-files" not found Jan 23 12:09:47 crc kubenswrapper[4865]: I0123 12:09:47.543479 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cxl62" event={"ID":"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8","Type":"ContainerStarted","Data":"1d0f4e37d787585c0dce1d313be75c018e8d2cc232ea6d3ecf87443e59f265d7"} Jan 23 12:09:47 crc kubenswrapper[4865]: I0123 12:09:47.546298 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" event={"ID":"613af722-ab75-4bff-a8b4-e7a19792c417","Type":"ContainerStarted","Data":"2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa"} Jan 23 12:09:47 crc kubenswrapper[4865]: I0123 12:09:47.546982 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hw472" podUID="bba9656c-fa5c-4675-a566-9ea549f1e553" containerName="registry-server" containerID="cri-o://7e2a67d413aa797327bdf48cb8698912b2015e31f975ce2c44590ee785650e5e" gracePeriod=2 Jan 23 12:09:47 crc kubenswrapper[4865]: I0123 12:09:47.573613 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" podStartSLOduration=3.573577573 podStartE2EDuration="3.573577573s" podCreationTimestamp="2026-01-23 12:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:09:47.566045249 +0000 UTC m=+1031.735117475" watchObservedRunningTime="2026-01-23 12:09:47.573577573 +0000 UTC m=+1031.742649799" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.417773 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-7jhmw"] Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.419444 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7jhmw" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.423036 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.426392 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7jhmw"] Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.524268 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50-operator-scripts\") pod \"root-account-create-update-7jhmw\" (UID: \"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50\") " pod="openstack/root-account-create-update-7jhmw" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.524356 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qthkj\" (UniqueName: \"kubernetes.io/projected/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50-kube-api-access-qthkj\") pod \"root-account-create-update-7jhmw\" (UID: \"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50\") " pod="openstack/root-account-create-update-7jhmw" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.555695 4865 generic.go:334] "Generic (PLEG): container finished" podID="bba9656c-fa5c-4675-a566-9ea549f1e553" containerID="7e2a67d413aa797327bdf48cb8698912b2015e31f975ce2c44590ee785650e5e" exitCode=0 Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.555787 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hw472" event={"ID":"bba9656c-fa5c-4675-a566-9ea549f1e553","Type":"ContainerDied","Data":"7e2a67d413aa797327bdf48cb8698912b2015e31f975ce2c44590ee785650e5e"} Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.556231 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.625832 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50-operator-scripts\") pod \"root-account-create-update-7jhmw\" (UID: \"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50\") " pod="openstack/root-account-create-update-7jhmw" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.626468 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qthkj\" (UniqueName: \"kubernetes.io/projected/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50-kube-api-access-qthkj\") pod \"root-account-create-update-7jhmw\" (UID: \"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50\") " pod="openstack/root-account-create-update-7jhmw" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.627390 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50-operator-scripts\") pod \"root-account-create-update-7jhmw\" (UID: \"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50\") " pod="openstack/root-account-create-update-7jhmw" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.646076 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qthkj\" (UniqueName: \"kubernetes.io/projected/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50-kube-api-access-qthkj\") pod \"root-account-create-update-7jhmw\" (UID: 
\"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50\") " pod="openstack/root-account-create-update-7jhmw" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.737060 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7jhmw" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.776083 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.776128 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.873252 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.922993 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:09:48 crc kubenswrapper[4865]: I0123 12:09:48.944157 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.035980 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba9656c-fa5c-4675-a566-9ea549f1e553-utilities\") pod \"bba9656c-fa5c-4675-a566-9ea549f1e553\" (UID: \"bba9656c-fa5c-4675-a566-9ea549f1e553\") " Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.036122 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba9656c-fa5c-4675-a566-9ea549f1e553-catalog-content\") pod \"bba9656c-fa5c-4675-a566-9ea549f1e553\" (UID: \"bba9656c-fa5c-4675-a566-9ea549f1e553\") " Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.036152 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx8pb\" (UniqueName: \"kubernetes.io/projected/bba9656c-fa5c-4675-a566-9ea549f1e553-kube-api-access-lx8pb\") pod \"bba9656c-fa5c-4675-a566-9ea549f1e553\" (UID: \"bba9656c-fa5c-4675-a566-9ea549f1e553\") " Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.036860 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba9656c-fa5c-4675-a566-9ea549f1e553-utilities" (OuterVolumeSpecName: "utilities") pod "bba9656c-fa5c-4675-a566-9ea549f1e553" (UID: "bba9656c-fa5c-4675-a566-9ea549f1e553"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.040504 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bba9656c-fa5c-4675-a566-9ea549f1e553-kube-api-access-lx8pb" (OuterVolumeSpecName: "kube-api-access-lx8pb") pod "bba9656c-fa5c-4675-a566-9ea549f1e553" (UID: "bba9656c-fa5c-4675-a566-9ea549f1e553"). InnerVolumeSpecName "kube-api-access-lx8pb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.076833 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba9656c-fa5c-4675-a566-9ea549f1e553-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bba9656c-fa5c-4675-a566-9ea549f1e553" (UID: "bba9656c-fa5c-4675-a566-9ea549f1e553"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.137641 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba9656c-fa5c-4675-a566-9ea549f1e553-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.137669 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba9656c-fa5c-4675-a566-9ea549f1e553-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.137681 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx8pb\" (UniqueName: \"kubernetes.io/projected/bba9656c-fa5c-4675-a566-9ea549f1e553-kube-api-access-lx8pb\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.340326 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:49 crc kubenswrapper[4865]: E0123 12:09:49.342742 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 12:09:49 crc kubenswrapper[4865]: E0123 12:09:49.342770 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 12:09:49 crc kubenswrapper[4865]: E0123 12:09:49.342824 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift podName:01f82f85-33db-4f45-97c6-84f6dd7689c8 nodeName:}" failed. No retries permitted until 2026-01-23 12:09:53.342808814 +0000 UTC m=+1037.511881040 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift") pod "swift-storage-0" (UID: "01f82f85-33db-4f45-97c6-84f6dd7689c8") : configmap "swift-ring-files" not found Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.436866 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2d8q5"] Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.568299 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hw472" Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.568467 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hw472" event={"ID":"bba9656c-fa5c-4675-a566-9ea549f1e553","Type":"ContainerDied","Data":"77d511b2012162ae7a8601538990d38b64f1daeda018c87b7af061d2c39fb442"} Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.569225 4865 scope.go:117] "RemoveContainer" containerID="7e2a67d413aa797327bdf48cb8698912b2015e31f975ce2c44590ee785650e5e" Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.612765 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hw472"] Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.622324 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hw472"] Jan 23 12:09:49 crc kubenswrapper[4865]: I0123 12:09:49.997581 4865 scope.go:117] "RemoveContainer" containerID="855645d4184838e5f2c587055307bbe75d864b8edb4b9f0d78004cef4b6b44ac" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.063570 4865 scope.go:117] "RemoveContainer" containerID="65296672351ad963c6de8e54eb031f198b3983c441ff4331444c92f1a421c396" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.146192 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bba9656c-fa5c-4675-a566-9ea549f1e553" path="/var/lib/kubelet/pods/bba9656c-fa5c-4675-a566-9ea549f1e553/volumes" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.470336 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-7jhmw"] Jan 23 12:09:50 crc kubenswrapper[4865]: W0123 12:09:50.479896 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b32b96a_bdd8_48f1_9f0a_891e76d0cd50.slice/crio-65bf5955c00847ff8ab4f3baff7815b44c737dbda517cf37a0c937d937fc0f88 WatchSource:0}: Error finding container 65bf5955c00847ff8ab4f3baff7815b44c737dbda517cf37a0c937d937fc0f88: Status 404 returned error can't find the container with id 65bf5955c00847ff8ab4f3baff7815b44c737dbda517cf37a0c937d937fc0f88 Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.578695 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7jhmw" event={"ID":"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50","Type":"ContainerStarted","Data":"65bf5955c00847ff8ab4f3baff7815b44c737dbda517cf37a0c937d937fc0f88"} Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.579736 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cxl62" event={"ID":"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8","Type":"ContainerStarted","Data":"e3def9ef2bd9b1a613c3b5dce12713d659402200fcabccbd2528eb3c4ee7e095"} Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.583086 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2d8q5" podUID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" containerName="registry-server" containerID="cri-o://527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667" gracePeriod=2 Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.583292 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"87116cbc-3d44-4cde-a97e-ce1fe81f2cb8","Type":"ContainerStarted","Data":"6064d54733436a9f3723851eeef4f9725be3780ecb71435963648d4e543cd5ec"} Jan 23 
12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.597440 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-cxl62" podStartSLOduration=1.644136104 podStartE2EDuration="4.59742364s" podCreationTimestamp="2026-01-23 12:09:46 +0000 UTC" firstStartedPulling="2026-01-23 12:09:47.111659922 +0000 UTC m=+1031.280732148" lastFinishedPulling="2026-01-23 12:09:50.064947458 +0000 UTC m=+1034.234019684" observedRunningTime="2026-01-23 12:09:50.593722969 +0000 UTC m=+1034.762795195" watchObservedRunningTime="2026-01-23 12:09:50.59742364 +0000 UTC m=+1034.766495866" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.617838 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=39.291562526 podStartE2EDuration="1m12.617822581s" podCreationTimestamp="2026-01-23 12:08:38 +0000 UTC" firstStartedPulling="2026-01-23 12:09:16.783739888 +0000 UTC m=+1000.952812114" lastFinishedPulling="2026-01-23 12:09:50.109999933 +0000 UTC m=+1034.279072169" observedRunningTime="2026-01-23 12:09:50.614845057 +0000 UTC m=+1034.783917283" watchObservedRunningTime="2026-01-23 12:09:50.617822581 +0000 UTC m=+1034.786894807" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.708424 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bwhj2"] Jan 23 12:09:50 crc kubenswrapper[4865]: E0123 12:09:50.710044 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba9656c-fa5c-4675-a566-9ea549f1e553" containerName="registry-server" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.710155 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba9656c-fa5c-4675-a566-9ea549f1e553" containerName="registry-server" Jan 23 12:09:50 crc kubenswrapper[4865]: E0123 12:09:50.710255 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba9656c-fa5c-4675-a566-9ea549f1e553" containerName="extract-content" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.710309 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba9656c-fa5c-4675-a566-9ea549f1e553" containerName="extract-content" Jan 23 12:09:50 crc kubenswrapper[4865]: E0123 12:09:50.710398 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba9656c-fa5c-4675-a566-9ea549f1e553" containerName="extract-utilities" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.710451 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba9656c-fa5c-4675-a566-9ea549f1e553" containerName="extract-utilities" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.711439 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba9656c-fa5c-4675-a566-9ea549f1e553" containerName="registry-server" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.712929 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bwhj2" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.754728 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bwhj2"] Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.766474 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd9s8\" (UniqueName: \"kubernetes.io/projected/40b36caf-c4a8-4b90-adf0-94f77019d3aa-kube-api-access-qd9s8\") pod \"keystone-db-create-bwhj2\" (UID: \"40b36caf-c4a8-4b90-adf0-94f77019d3aa\") " pod="openstack/keystone-db-create-bwhj2" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.766579 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40b36caf-c4a8-4b90-adf0-94f77019d3aa-operator-scripts\") pod \"keystone-db-create-bwhj2\" (UID: \"40b36caf-c4a8-4b90-adf0-94f77019d3aa\") " pod="openstack/keystone-db-create-bwhj2" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.810826 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-fe22-account-create-update-g9dt9"] Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.813312 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fe22-account-create-update-g9dt9" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.817075 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.824124 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fe22-account-create-update-g9dt9"] Jan 23 12:09:50 crc kubenswrapper[4865]: E0123 12:09:50.848467 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45a9eff2_ba5b_4b5e_89a9_c0aa03dfec1e.slice/crio-conmon-527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45a9eff2_ba5b_4b5e_89a9_c0aa03dfec1e.slice/crio-527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667.scope\": RecentStats: unable to find data in memory cache]" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.867578 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8104e0dd-89be-4d8f-a300-8c321e2959d0-operator-scripts\") pod \"keystone-fe22-account-create-update-g9dt9\" (UID: \"8104e0dd-89be-4d8f-a300-8c321e2959d0\") " pod="openstack/keystone-fe22-account-create-update-g9dt9" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.867672 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd9s8\" (UniqueName: \"kubernetes.io/projected/40b36caf-c4a8-4b90-adf0-94f77019d3aa-kube-api-access-qd9s8\") pod \"keystone-db-create-bwhj2\" (UID: \"40b36caf-c4a8-4b90-adf0-94f77019d3aa\") " pod="openstack/keystone-db-create-bwhj2" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.867746 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40b36caf-c4a8-4b90-adf0-94f77019d3aa-operator-scripts\") pod \"keystone-db-create-bwhj2\" (UID: 
\"40b36caf-c4a8-4b90-adf0-94f77019d3aa\") " pod="openstack/keystone-db-create-bwhj2" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.867767 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4gh\" (UniqueName: \"kubernetes.io/projected/8104e0dd-89be-4d8f-a300-8c321e2959d0-kube-api-access-kd4gh\") pod \"keystone-fe22-account-create-update-g9dt9\" (UID: \"8104e0dd-89be-4d8f-a300-8c321e2959d0\") " pod="openstack/keystone-fe22-account-create-update-g9dt9" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.868662 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40b36caf-c4a8-4b90-adf0-94f77019d3aa-operator-scripts\") pod \"keystone-db-create-bwhj2\" (UID: \"40b36caf-c4a8-4b90-adf0-94f77019d3aa\") " pod="openstack/keystone-db-create-bwhj2" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.898422 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd9s8\" (UniqueName: \"kubernetes.io/projected/40b36caf-c4a8-4b90-adf0-94f77019d3aa-kube-api-access-qd9s8\") pod \"keystone-db-create-bwhj2\" (UID: \"40b36caf-c4a8-4b90-adf0-94f77019d3aa\") " pod="openstack/keystone-db-create-bwhj2" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.968989 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd4gh\" (UniqueName: \"kubernetes.io/projected/8104e0dd-89be-4d8f-a300-8c321e2959d0-kube-api-access-kd4gh\") pod \"keystone-fe22-account-create-update-g9dt9\" (UID: \"8104e0dd-89be-4d8f-a300-8c321e2959d0\") " pod="openstack/keystone-fe22-account-create-update-g9dt9" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.969040 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8104e0dd-89be-4d8f-a300-8c321e2959d0-operator-scripts\") pod \"keystone-fe22-account-create-update-g9dt9\" (UID: \"8104e0dd-89be-4d8f-a300-8c321e2959d0\") " pod="openstack/keystone-fe22-account-create-update-g9dt9" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.969749 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8104e0dd-89be-4d8f-a300-8c321e2959d0-operator-scripts\") pod \"keystone-fe22-account-create-update-g9dt9\" (UID: \"8104e0dd-89be-4d8f-a300-8c321e2959d0\") " pod="openstack/keystone-fe22-account-create-update-g9dt9" Jan 23 12:09:50 crc kubenswrapper[4865]: I0123 12:09:50.986242 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd4gh\" (UniqueName: \"kubernetes.io/projected/8104e0dd-89be-4d8f-a300-8c321e2959d0-kube-api-access-kd4gh\") pod \"keystone-fe22-account-create-update-g9dt9\" (UID: \"8104e0dd-89be-4d8f-a300-8c321e2959d0\") " pod="openstack/keystone-fe22-account-create-update-g9dt9" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.061280 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bwhj2" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.073714 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-v6mpl"] Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.074803 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v6mpl" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.091198 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v6mpl"] Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.171810 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/182030e3-73bd-492b-b070-7299395fd9e8-operator-scripts\") pod \"placement-db-create-v6mpl\" (UID: \"182030e3-73bd-492b-b070-7299395fd9e8\") " pod="openstack/placement-db-create-v6mpl" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.172405 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv9pt\" (UniqueName: \"kubernetes.io/projected/182030e3-73bd-492b-b070-7299395fd9e8-kube-api-access-nv9pt\") pod \"placement-db-create-v6mpl\" (UID: \"182030e3-73bd-492b-b070-7299395fd9e8\") " pod="openstack/placement-db-create-v6mpl" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.173775 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fe22-account-create-update-g9dt9" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.230395 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-2d08-account-create-update-bc5fq"] Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.231420 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2d08-account-create-update-bc5fq" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.233521 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.233787 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.237191 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2d08-account-create-update-bc5fq"] Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.276685 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/182030e3-73bd-492b-b070-7299395fd9e8-operator-scripts\") pod \"placement-db-create-v6mpl\" (UID: \"182030e3-73bd-492b-b070-7299395fd9e8\") " pod="openstack/placement-db-create-v6mpl" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.276737 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv9pt\" (UniqueName: \"kubernetes.io/projected/182030e3-73bd-492b-b070-7299395fd9e8-kube-api-access-nv9pt\") pod \"placement-db-create-v6mpl\" (UID: \"182030e3-73bd-492b-b070-7299395fd9e8\") " pod="openstack/placement-db-create-v6mpl" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.277535 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/182030e3-73bd-492b-b070-7299395fd9e8-operator-scripts\") pod \"placement-db-create-v6mpl\" (UID: \"182030e3-73bd-492b-b070-7299395fd9e8\") " pod="openstack/placement-db-create-v6mpl" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.307386 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv9pt\" (UniqueName: \"kubernetes.io/projected/182030e3-73bd-492b-b070-7299395fd9e8-kube-api-access-nv9pt\") pod \"placement-db-create-v6mpl\" (UID: \"182030e3-73bd-492b-b070-7299395fd9e8\") " pod="openstack/placement-db-create-v6mpl" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.378946 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jg2j\" (UniqueName: \"kubernetes.io/projected/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-kube-api-access-4jg2j\") pod \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\" (UID: \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\") " Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.379063 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-catalog-content\") pod \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\" (UID: \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\") " Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.379093 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-utilities\") pod \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\" (UID: \"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e\") " Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.379361 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a3a4e3-ba70-426a-abfc-6b8fd4c76632-operator-scripts\") pod \"placement-2d08-account-create-update-bc5fq\" (UID: \"75a3a4e3-ba70-426a-abfc-6b8fd4c76632\") " pod="openstack/placement-2d08-account-create-update-bc5fq" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.379395 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7tqb\" (UniqueName: 
\"kubernetes.io/projected/75a3a4e3-ba70-426a-abfc-6b8fd4c76632-kube-api-access-z7tqb\") pod \"placement-2d08-account-create-update-bc5fq\" (UID: \"75a3a4e3-ba70-426a-abfc-6b8fd4c76632\") " pod="openstack/placement-2d08-account-create-update-bc5fq" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.382540 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-utilities" (OuterVolumeSpecName: "utilities") pod "45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" (UID: "45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.386069 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-kube-api-access-4jg2j" (OuterVolumeSpecName: "kube-api-access-4jg2j") pod "45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" (UID: "45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e"). InnerVolumeSpecName "kube-api-access-4jg2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.392656 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-lpvc6"] Jan 23 12:09:51 crc kubenswrapper[4865]: E0123 12:09:51.392979 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" containerName="extract-content" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.392995 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" containerName="extract-content" Jan 23 12:09:51 crc kubenswrapper[4865]: E0123 12:09:51.393024 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" containerName="registry-server" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.393030 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" containerName="registry-server" Jan 23 12:09:51 crc kubenswrapper[4865]: E0123 12:09:51.393049 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" containerName="extract-utilities" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.393055 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" containerName="extract-utilities" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.393205 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" containerName="registry-server" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.393682 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-lpvc6" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.406241 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lpvc6"] Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.481513 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a3a4e3-ba70-426a-abfc-6b8fd4c76632-operator-scripts\") pod \"placement-2d08-account-create-update-bc5fq\" (UID: \"75a3a4e3-ba70-426a-abfc-6b8fd4c76632\") " pod="openstack/placement-2d08-account-create-update-bc5fq" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.481853 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7tqb\" (UniqueName: \"kubernetes.io/projected/75a3a4e3-ba70-426a-abfc-6b8fd4c76632-kube-api-access-z7tqb\") pod \"placement-2d08-account-create-update-bc5fq\" (UID: \"75a3a4e3-ba70-426a-abfc-6b8fd4c76632\") " pod="openstack/placement-2d08-account-create-update-bc5fq" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.481930 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eec7ea5-0436-42dc-b49a-d7d9a902977b-operator-scripts\") pod \"glance-db-create-lpvc6\" (UID: \"0eec7ea5-0436-42dc-b49a-d7d9a902977b\") " pod="openstack/glance-db-create-lpvc6" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.481968 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjnq7\" (UniqueName: \"kubernetes.io/projected/0eec7ea5-0436-42dc-b49a-d7d9a902977b-kube-api-access-zjnq7\") pod \"glance-db-create-lpvc6\" (UID: \"0eec7ea5-0436-42dc-b49a-d7d9a902977b\") " pod="openstack/glance-db-create-lpvc6" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.482079 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jg2j\" (UniqueName: \"kubernetes.io/projected/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-kube-api-access-4jg2j\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.482097 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.482510 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a3a4e3-ba70-426a-abfc-6b8fd4c76632-operator-scripts\") pod \"placement-2d08-account-create-update-bc5fq\" (UID: \"75a3a4e3-ba70-426a-abfc-6b8fd4c76632\") " pod="openstack/placement-2d08-account-create-update-bc5fq" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.497985 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7tqb\" (UniqueName: \"kubernetes.io/projected/75a3a4e3-ba70-426a-abfc-6b8fd4c76632-kube-api-access-z7tqb\") pod \"placement-2d08-account-create-update-bc5fq\" (UID: \"75a3a4e3-ba70-426a-abfc-6b8fd4c76632\") " pod="openstack/placement-2d08-account-create-update-bc5fq" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.525715 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v6mpl" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.526762 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" (UID: "45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.560684 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0d2a-account-create-update-ht6rp"] Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.565908 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0d2a-account-create-update-ht6rp" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.570542 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.576894 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2d08-account-create-update-bc5fq" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.580056 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0d2a-account-create-update-ht6rp"] Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.584867 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eec7ea5-0436-42dc-b49a-d7d9a902977b-operator-scripts\") pod \"glance-db-create-lpvc6\" (UID: \"0eec7ea5-0436-42dc-b49a-d7d9a902977b\") " pod="openstack/glance-db-create-lpvc6" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.585197 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjnq7\" (UniqueName: \"kubernetes.io/projected/0eec7ea5-0436-42dc-b49a-d7d9a902977b-kube-api-access-zjnq7\") pod \"glance-db-create-lpvc6\" (UID: \"0eec7ea5-0436-42dc-b49a-d7d9a902977b\") " pod="openstack/glance-db-create-lpvc6" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.585457 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.586624 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eec7ea5-0436-42dc-b49a-d7d9a902977b-operator-scripts\") pod \"glance-db-create-lpvc6\" (UID: \"0eec7ea5-0436-42dc-b49a-d7d9a902977b\") " pod="openstack/glance-db-create-lpvc6" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.607101 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjnq7\" (UniqueName: \"kubernetes.io/projected/0eec7ea5-0436-42dc-b49a-d7d9a902977b-kube-api-access-zjnq7\") pod \"glance-db-create-lpvc6\" (UID: \"0eec7ea5-0436-42dc-b49a-d7d9a902977b\") " pod="openstack/glance-db-create-lpvc6" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.610472 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bwhj2"] Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.610786 4865 generic.go:334] "Generic (PLEG): container finished" podID="7b32b96a-bdd8-48f1-9f0a-891e76d0cd50" 
containerID="53821fb832774694ea5785f0987661c3ae10cebcd2820bdbf3fd05d5715af480" exitCode=0 Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.610875 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7jhmw" event={"ID":"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50","Type":"ContainerDied","Data":"53821fb832774694ea5785f0987661c3ae10cebcd2820bdbf3fd05d5715af480"} Jan 23 12:09:51 crc kubenswrapper[4865]: W0123 12:09:51.613345 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40b36caf_c4a8_4b90_adf0_94f77019d3aa.slice/crio-81a8191b68708a59f073677bbb277e5b1bb2d3c559766be0a6172b5ea597859a WatchSource:0}: Error finding container 81a8191b68708a59f073677bbb277e5b1bb2d3c559766be0a6172b5ea597859a: Status 404 returned error can't find the container with id 81a8191b68708a59f073677bbb277e5b1bb2d3c559766be0a6172b5ea597859a Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.628059 4865 generic.go:334] "Generic (PLEG): container finished" podID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" containerID="527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667" exitCode=0 Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.628710 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d8q5" event={"ID":"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e","Type":"ContainerDied","Data":"527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667"} Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.628784 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2d8q5" event={"ID":"45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e","Type":"ContainerDied","Data":"4a07fec7163d70e698a0cff0fa278473085a61a26197d3aaaa4e48e07915fb98"} Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.628828 4865 scope.go:117] "RemoveContainer" containerID="527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.629066 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2d8q5" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.675744 4865 scope.go:117] "RemoveContainer" containerID="4ef887ea4d280a2c7021150a086856249933eff3cf36167039205a48e7d4cc64" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.680547 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2d8q5"] Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.686586 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8389f4ae-eeb3-4dbf-ada2-14a152755af1-operator-scripts\") pod \"glance-0d2a-account-create-update-ht6rp\" (UID: \"8389f4ae-eeb3-4dbf-ada2-14a152755af1\") " pod="openstack/glance-0d2a-account-create-update-ht6rp" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.686761 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9rgn\" (UniqueName: \"kubernetes.io/projected/8389f4ae-eeb3-4dbf-ada2-14a152755af1-kube-api-access-v9rgn\") pod \"glance-0d2a-account-create-update-ht6rp\" (UID: \"8389f4ae-eeb3-4dbf-ada2-14a152755af1\") " pod="openstack/glance-0d2a-account-create-update-ht6rp" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.688086 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2d8q5"] Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.710374 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lpvc6" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.738578 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-fe22-account-create-update-g9dt9"] Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.742322 4865 scope.go:117] "RemoveContainer" containerID="2d161e3ffc70e3ec9fcac0eb04b111f660a2b703b05cc19d756fad7895bcab0a" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.788554 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8389f4ae-eeb3-4dbf-ada2-14a152755af1-operator-scripts\") pod \"glance-0d2a-account-create-update-ht6rp\" (UID: \"8389f4ae-eeb3-4dbf-ada2-14a152755af1\") " pod="openstack/glance-0d2a-account-create-update-ht6rp" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.788689 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9rgn\" (UniqueName: \"kubernetes.io/projected/8389f4ae-eeb3-4dbf-ada2-14a152755af1-kube-api-access-v9rgn\") pod \"glance-0d2a-account-create-update-ht6rp\" (UID: \"8389f4ae-eeb3-4dbf-ada2-14a152755af1\") " pod="openstack/glance-0d2a-account-create-update-ht6rp" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.790125 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8389f4ae-eeb3-4dbf-ada2-14a152755af1-operator-scripts\") pod \"glance-0d2a-account-create-update-ht6rp\" (UID: \"8389f4ae-eeb3-4dbf-ada2-14a152755af1\") " pod="openstack/glance-0d2a-account-create-update-ht6rp" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.824807 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9rgn\" (UniqueName: \"kubernetes.io/projected/8389f4ae-eeb3-4dbf-ada2-14a152755af1-kube-api-access-v9rgn\") pod 
\"glance-0d2a-account-create-update-ht6rp\" (UID: \"8389f4ae-eeb3-4dbf-ada2-14a152755af1\") " pod="openstack/glance-0d2a-account-create-update-ht6rp" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.836074 4865 scope.go:117] "RemoveContainer" containerID="527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667" Jan 23 12:09:51 crc kubenswrapper[4865]: E0123 12:09:51.841350 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667\": container with ID starting with 527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667 not found: ID does not exist" containerID="527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.841405 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667"} err="failed to get container status \"527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667\": rpc error: code = NotFound desc = could not find container \"527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667\": container with ID starting with 527307e689e17a30da6df2c26f3944101283141aa01eb002d7b0d36aec821667 not found: ID does not exist" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.841430 4865 scope.go:117] "RemoveContainer" containerID="4ef887ea4d280a2c7021150a086856249933eff3cf36167039205a48e7d4cc64" Jan 23 12:09:51 crc kubenswrapper[4865]: E0123 12:09:51.845222 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ef887ea4d280a2c7021150a086856249933eff3cf36167039205a48e7d4cc64\": container with ID starting with 4ef887ea4d280a2c7021150a086856249933eff3cf36167039205a48e7d4cc64 not found: ID does not exist" containerID="4ef887ea4d280a2c7021150a086856249933eff3cf36167039205a48e7d4cc64" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.845277 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ef887ea4d280a2c7021150a086856249933eff3cf36167039205a48e7d4cc64"} err="failed to get container status \"4ef887ea4d280a2c7021150a086856249933eff3cf36167039205a48e7d4cc64\": rpc error: code = NotFound desc = could not find container \"4ef887ea4d280a2c7021150a086856249933eff3cf36167039205a48e7d4cc64\": container with ID starting with 4ef887ea4d280a2c7021150a086856249933eff3cf36167039205a48e7d4cc64 not found: ID does not exist" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.845313 4865 scope.go:117] "RemoveContainer" containerID="2d161e3ffc70e3ec9fcac0eb04b111f660a2b703b05cc19d756fad7895bcab0a" Jan 23 12:09:51 crc kubenswrapper[4865]: E0123 12:09:51.846990 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d161e3ffc70e3ec9fcac0eb04b111f660a2b703b05cc19d756fad7895bcab0a\": container with ID starting with 2d161e3ffc70e3ec9fcac0eb04b111f660a2b703b05cc19d756fad7895bcab0a not found: ID does not exist" containerID="2d161e3ffc70e3ec9fcac0eb04b111f660a2b703b05cc19d756fad7895bcab0a" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.847033 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d161e3ffc70e3ec9fcac0eb04b111f660a2b703b05cc19d756fad7895bcab0a"} err="failed to get container status 
\"2d161e3ffc70e3ec9fcac0eb04b111f660a2b703b05cc19d756fad7895bcab0a\": rpc error: code = NotFound desc = could not find container \"2d161e3ffc70e3ec9fcac0eb04b111f660a2b703b05cc19d756fad7895bcab0a\": container with ID starting with 2d161e3ffc70e3ec9fcac0eb04b111f660a2b703b05cc19d756fad7895bcab0a not found: ID does not exist" Jan 23 12:09:51 crc kubenswrapper[4865]: I0123 12:09:51.908247 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0d2a-account-create-update-ht6rp" Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.154095 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e" path="/var/lib/kubelet/pods/45a9eff2-ba5b-4b5e-89a9-c0aa03dfec1e/volumes" Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.156575 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v6mpl"] Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.167683 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-2d08-account-create-update-bc5fq"] Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.292749 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.387506 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-lpvc6"] Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.520886 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0d2a-account-create-update-ht6rp"] Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.639474 4865 generic.go:334] "Generic (PLEG): container finished" podID="8104e0dd-89be-4d8f-a300-8c321e2959d0" containerID="68bf6361cff1b404a4396ebd4214093c58a4075ecd4e5820e8b087e73e2283c2" exitCode=0 Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.639567 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fe22-account-create-update-g9dt9" event={"ID":"8104e0dd-89be-4d8f-a300-8c321e2959d0","Type":"ContainerDied","Data":"68bf6361cff1b404a4396ebd4214093c58a4075ecd4e5820e8b087e73e2283c2"} Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.639592 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fe22-account-create-update-g9dt9" event={"ID":"8104e0dd-89be-4d8f-a300-8c321e2959d0","Type":"ContainerStarted","Data":"8e3fbaebb0b30211253cedf82437c2994fc1a7b7374dd1fb6bdd02b2cb90d09e"} Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.641836 4865 generic.go:334] "Generic (PLEG): container finished" podID="75a3a4e3-ba70-426a-abfc-6b8fd4c76632" containerID="10db599a72e80c6ade55df3c27729fa059210304b61b76f08c80551517e01dce" exitCode=0 Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.641898 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2d08-account-create-update-bc5fq" event={"ID":"75a3a4e3-ba70-426a-abfc-6b8fd4c76632","Type":"ContainerDied","Data":"10db599a72e80c6ade55df3c27729fa059210304b61b76f08c80551517e01dce"} Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.641940 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2d08-account-create-update-bc5fq" event={"ID":"75a3a4e3-ba70-426a-abfc-6b8fd4c76632","Type":"ContainerStarted","Data":"204caa4036e395d75260ae63f234d26e845465eea741c600a259a424429e0d10"} Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.643309 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-0d2a-account-create-update-ht6rp" event={"ID":"8389f4ae-eeb3-4dbf-ada2-14a152755af1","Type":"ContainerStarted","Data":"5b07f504293c577c7c45b6405bf9535f44573257bde0ba29a188fd022916b183"} Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.654982 4865 generic.go:334] "Generic (PLEG): container finished" podID="40b36caf-c4a8-4b90-adf0-94f77019d3aa" containerID="16bfbe2f2641d69d6007aacd6b4dca44387f827b19b8327fa768d75e9a950f1f" exitCode=0 Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.655207 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bwhj2" event={"ID":"40b36caf-c4a8-4b90-adf0-94f77019d3aa","Type":"ContainerDied","Data":"16bfbe2f2641d69d6007aacd6b4dca44387f827b19b8327fa768d75e9a950f1f"} Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.655250 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bwhj2" event={"ID":"40b36caf-c4a8-4b90-adf0-94f77019d3aa","Type":"ContainerStarted","Data":"81a8191b68708a59f073677bbb277e5b1bb2d3c559766be0a6172b5ea597859a"} Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.657503 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lpvc6" event={"ID":"0eec7ea5-0436-42dc-b49a-d7d9a902977b","Type":"ContainerStarted","Data":"a418635a52a035cd40dd57bc81964a1e82f01a8fb34505a5e02edb5c09b512c0"} Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.657541 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lpvc6" event={"ID":"0eec7ea5-0436-42dc-b49a-d7d9a902977b","Type":"ContainerStarted","Data":"c09296a2a09efbbaf2e6e5cd321790e0c1e9a7360a2c107e433705b30aba1676"} Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.661142 4865 generic.go:334] "Generic (PLEG): container finished" podID="182030e3-73bd-492b-b070-7299395fd9e8" containerID="e3bef69dadbabe9f19c71d3355aeac110bd7ef5cd66a34df8d9ace106f3b8e30" exitCode=0 Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.661230 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v6mpl" event={"ID":"182030e3-73bd-492b-b070-7299395fd9e8","Type":"ContainerDied","Data":"e3bef69dadbabe9f19c71d3355aeac110bd7ef5cd66a34df8d9ace106f3b8e30"} Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.661577 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v6mpl" event={"ID":"182030e3-73bd-492b-b070-7299395fd9e8","Type":"ContainerStarted","Data":"643288431b50191d4cbf246278c509d5bbd7b711e4b01168c5fbd7741635af06"} Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.685800 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-lpvc6" podStartSLOduration=1.685782388 podStartE2EDuration="1.685782388s" podCreationTimestamp="2026-01-23 12:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:09:52.683688147 +0000 UTC m=+1036.852760373" watchObservedRunningTime="2026-01-23 12:09:52.685782388 +0000 UTC m=+1036.854854614" Jan 23 12:09:52 crc kubenswrapper[4865]: I0123 12:09:52.951442 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-7jhmw" Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.029473 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qthkj\" (UniqueName: \"kubernetes.io/projected/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50-kube-api-access-qthkj\") pod \"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50\" (UID: \"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50\") " Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.029698 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50-operator-scripts\") pod \"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50\" (UID: \"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50\") " Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.030424 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b32b96a-bdd8-48f1-9f0a-891e76d0cd50" (UID: "7b32b96a-bdd8-48f1-9f0a-891e76d0cd50"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.036960 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50-kube-api-access-qthkj" (OuterVolumeSpecName: "kube-api-access-qthkj") pod "7b32b96a-bdd8-48f1-9f0a-891e76d0cd50" (UID: "7b32b96a-bdd8-48f1-9f0a-891e76d0cd50"). InnerVolumeSpecName "kube-api-access-qthkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.131975 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.132277 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qthkj\" (UniqueName: \"kubernetes.io/projected/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50-kube-api-access-qthkj\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.437496 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:09:53 crc kubenswrapper[4865]: E0123 12:09:53.437721 4865 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 23 12:09:53 crc kubenswrapper[4865]: E0123 12:09:53.437751 4865 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 23 12:09:53 crc kubenswrapper[4865]: E0123 12:09:53.437821 4865 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift podName:01f82f85-33db-4f45-97c6-84f6dd7689c8 nodeName:}" failed. No retries permitted until 2026-01-23 12:10:01.437799946 +0000 UTC m=+1045.606872192 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift") pod "swift-storage-0" (UID: "01f82f85-33db-4f45-97c6-84f6dd7689c8") : configmap "swift-ring-files" not found Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.670666 4865 generic.go:334] "Generic (PLEG): container finished" podID="8389f4ae-eeb3-4dbf-ada2-14a152755af1" containerID="9796ce4ffa86d8949c0573457f35eb5a11dcb5523558b2a1db5dadf190e8f6be" exitCode=0 Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.670722 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0d2a-account-create-update-ht6rp" event={"ID":"8389f4ae-eeb3-4dbf-ada2-14a152755af1","Type":"ContainerDied","Data":"9796ce4ffa86d8949c0573457f35eb5a11dcb5523558b2a1db5dadf190e8f6be"} Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.672954 4865 generic.go:334] "Generic (PLEG): container finished" podID="0eec7ea5-0436-42dc-b49a-d7d9a902977b" containerID="a418635a52a035cd40dd57bc81964a1e82f01a8fb34505a5e02edb5c09b512c0" exitCode=0 Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.673072 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lpvc6" event={"ID":"0eec7ea5-0436-42dc-b49a-d7d9a902977b","Type":"ContainerDied","Data":"a418635a52a035cd40dd57bc81964a1e82f01a8fb34505a5e02edb5c09b512c0"} Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.677419 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-7jhmw" Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.682481 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-7jhmw" event={"ID":"7b32b96a-bdd8-48f1-9f0a-891e76d0cd50","Type":"ContainerDied","Data":"65bf5955c00847ff8ab4f3baff7815b44c737dbda517cf37a0c937d937fc0f88"} Jan 23 12:09:53 crc kubenswrapper[4865]: I0123 12:09:53.682536 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65bf5955c00847ff8ab4f3baff7815b44c737dbda517cf37a0c937d937fc0f88" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.127523 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fe22-account-create-update-g9dt9" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.151836 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8104e0dd-89be-4d8f-a300-8c321e2959d0-operator-scripts\") pod \"8104e0dd-89be-4d8f-a300-8c321e2959d0\" (UID: \"8104e0dd-89be-4d8f-a300-8c321e2959d0\") " Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.151992 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd4gh\" (UniqueName: \"kubernetes.io/projected/8104e0dd-89be-4d8f-a300-8c321e2959d0-kube-api-access-kd4gh\") pod \"8104e0dd-89be-4d8f-a300-8c321e2959d0\" (UID: \"8104e0dd-89be-4d8f-a300-8c321e2959d0\") " Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.152809 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8104e0dd-89be-4d8f-a300-8c321e2959d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8104e0dd-89be-4d8f-a300-8c321e2959d0" (UID: "8104e0dd-89be-4d8f-a300-8c321e2959d0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.161313 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8104e0dd-89be-4d8f-a300-8c321e2959d0-kube-api-access-kd4gh" (OuterVolumeSpecName: "kube-api-access-kd4gh") pod "8104e0dd-89be-4d8f-a300-8c321e2959d0" (UID: "8104e0dd-89be-4d8f-a300-8c321e2959d0"). InnerVolumeSpecName "kube-api-access-kd4gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.257873 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8104e0dd-89be-4d8f-a300-8c321e2959d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.257901 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd4gh\" (UniqueName: \"kubernetes.io/projected/8104e0dd-89be-4d8f-a300-8c321e2959d0-kube-api-access-kd4gh\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.309707 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bwhj2" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.319818 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2d08-account-create-update-bc5fq" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.341187 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v6mpl" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.361226 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd9s8\" (UniqueName: \"kubernetes.io/projected/40b36caf-c4a8-4b90-adf0-94f77019d3aa-kube-api-access-qd9s8\") pod \"40b36caf-c4a8-4b90-adf0-94f77019d3aa\" (UID: \"40b36caf-c4a8-4b90-adf0-94f77019d3aa\") " Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.361890 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7tqb\" (UniqueName: \"kubernetes.io/projected/75a3a4e3-ba70-426a-abfc-6b8fd4c76632-kube-api-access-z7tqb\") pod \"75a3a4e3-ba70-426a-abfc-6b8fd4c76632\" (UID: \"75a3a4e3-ba70-426a-abfc-6b8fd4c76632\") " Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.362037 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40b36caf-c4a8-4b90-adf0-94f77019d3aa-operator-scripts\") pod \"40b36caf-c4a8-4b90-adf0-94f77019d3aa\" (UID: \"40b36caf-c4a8-4b90-adf0-94f77019d3aa\") " Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.362284 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a3a4e3-ba70-426a-abfc-6b8fd4c76632-operator-scripts\") pod \"75a3a4e3-ba70-426a-abfc-6b8fd4c76632\" (UID: \"75a3a4e3-ba70-426a-abfc-6b8fd4c76632\") " Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.363243 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75a3a4e3-ba70-426a-abfc-6b8fd4c76632-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75a3a4e3-ba70-426a-abfc-6b8fd4c76632" (UID: "75a3a4e3-ba70-426a-abfc-6b8fd4c76632"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.363750 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b36caf-c4a8-4b90-adf0-94f77019d3aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40b36caf-c4a8-4b90-adf0-94f77019d3aa" (UID: "40b36caf-c4a8-4b90-adf0-94f77019d3aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.368364 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a3a4e3-ba70-426a-abfc-6b8fd4c76632-kube-api-access-z7tqb" (OuterVolumeSpecName: "kube-api-access-z7tqb") pod "75a3a4e3-ba70-426a-abfc-6b8fd4c76632" (UID: "75a3a4e3-ba70-426a-abfc-6b8fd4c76632"). InnerVolumeSpecName "kube-api-access-z7tqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.370736 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b36caf-c4a8-4b90-adf0-94f77019d3aa-kube-api-access-qd9s8" (OuterVolumeSpecName: "kube-api-access-qd9s8") pod "40b36caf-c4a8-4b90-adf0-94f77019d3aa" (UID: "40b36caf-c4a8-4b90-adf0-94f77019d3aa"). InnerVolumeSpecName "kube-api-access-qd9s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.421102 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-7jhmw"] Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.429224 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-7jhmw"] Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.464506 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/182030e3-73bd-492b-b070-7299395fd9e8-operator-scripts\") pod \"182030e3-73bd-492b-b070-7299395fd9e8\" (UID: \"182030e3-73bd-492b-b070-7299395fd9e8\") " Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.464557 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv9pt\" (UniqueName: \"kubernetes.io/projected/182030e3-73bd-492b-b070-7299395fd9e8-kube-api-access-nv9pt\") pod \"182030e3-73bd-492b-b070-7299395fd9e8\" (UID: \"182030e3-73bd-492b-b070-7299395fd9e8\") " Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.464902 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75a3a4e3-ba70-426a-abfc-6b8fd4c76632-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.464919 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd9s8\" (UniqueName: \"kubernetes.io/projected/40b36caf-c4a8-4b90-adf0-94f77019d3aa-kube-api-access-qd9s8\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.464930 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7tqb\" (UniqueName: \"kubernetes.io/projected/75a3a4e3-ba70-426a-abfc-6b8fd4c76632-kube-api-access-z7tqb\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.464939 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/40b36caf-c4a8-4b90-adf0-94f77019d3aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.465387 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/182030e3-73bd-492b-b070-7299395fd9e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "182030e3-73bd-492b-b070-7299395fd9e8" (UID: "182030e3-73bd-492b-b070-7299395fd9e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.469124 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/182030e3-73bd-492b-b070-7299395fd9e8-kube-api-access-nv9pt" (OuterVolumeSpecName: "kube-api-access-nv9pt") pod "182030e3-73bd-492b-b070-7299395fd9e8" (UID: "182030e3-73bd-492b-b070-7299395fd9e8"). InnerVolumeSpecName "kube-api-access-nv9pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.567787 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/182030e3-73bd-492b-b070-7299395fd9e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.568167 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv9pt\" (UniqueName: \"kubernetes.io/projected/182030e3-73bd-492b-b070-7299395fd9e8-kube-api-access-nv9pt\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.665817 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.687077 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bwhj2" event={"ID":"40b36caf-c4a8-4b90-adf0-94f77019d3aa","Type":"ContainerDied","Data":"81a8191b68708a59f073677bbb277e5b1bb2d3c559766be0a6172b5ea597859a"} Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.688121 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81a8191b68708a59f073677bbb277e5b1bb2d3c559766be0a6172b5ea597859a" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.687112 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bwhj2" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.690001 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v6mpl" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.715411 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v6mpl" event={"ID":"182030e3-73bd-492b-b070-7299395fd9e8","Type":"ContainerDied","Data":"643288431b50191d4cbf246278c509d5bbd7b711e4b01168c5fbd7741635af06"} Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.715474 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="643288431b50191d4cbf246278c509d5bbd7b711e4b01168c5fbd7741635af06" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.726417 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-fe22-account-create-update-g9dt9" event={"ID":"8104e0dd-89be-4d8f-a300-8c321e2959d0","Type":"ContainerDied","Data":"8e3fbaebb0b30211253cedf82437c2994fc1a7b7374dd1fb6bdd02b2cb90d09e"} Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.726458 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e3fbaebb0b30211253cedf82437c2994fc1a7b7374dd1fb6bdd02b2cb90d09e" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.726589 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-fe22-account-create-update-g9dt9" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.731138 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-2d08-account-create-update-bc5fq" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.731206 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-2d08-account-create-update-bc5fq" event={"ID":"75a3a4e3-ba70-426a-abfc-6b8fd4c76632","Type":"ContainerDied","Data":"204caa4036e395d75260ae63f234d26e845465eea741c600a259a424429e0d10"} Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.731264 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="204caa4036e395d75260ae63f234d26e845465eea741c600a259a424429e0d10" Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.803331 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cf7dc6df-xpjmc"] Jan 23 12:09:54 crc kubenswrapper[4865]: I0123 12:09:54.803515 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" podUID="15c39433-1f67-4c52-9a8f-df981b9af880" containerName="dnsmasq-dns" containerID="cri-o://f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4" gracePeriod=10 Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.147045 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0d2a-account-create-update-ht6rp" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.183835 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9rgn\" (UniqueName: \"kubernetes.io/projected/8389f4ae-eeb3-4dbf-ada2-14a152755af1-kube-api-access-v9rgn\") pod \"8389f4ae-eeb3-4dbf-ada2-14a152755af1\" (UID: \"8389f4ae-eeb3-4dbf-ada2-14a152755af1\") " Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.184030 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8389f4ae-eeb3-4dbf-ada2-14a152755af1-operator-scripts\") pod \"8389f4ae-eeb3-4dbf-ada2-14a152755af1\" (UID: \"8389f4ae-eeb3-4dbf-ada2-14a152755af1\") " Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.184886 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8389f4ae-eeb3-4dbf-ada2-14a152755af1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8389f4ae-eeb3-4dbf-ada2-14a152755af1" (UID: "8389f4ae-eeb3-4dbf-ada2-14a152755af1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.192030 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8389f4ae-eeb3-4dbf-ada2-14a152755af1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.212707 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8389f4ae-eeb3-4dbf-ada2-14a152755af1-kube-api-access-v9rgn" (OuterVolumeSpecName: "kube-api-access-v9rgn") pod "8389f4ae-eeb3-4dbf-ada2-14a152755af1" (UID: "8389f4ae-eeb3-4dbf-ada2-14a152755af1"). InnerVolumeSpecName "kube-api-access-v9rgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.286753 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-lpvc6" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.295924 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.297446 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9rgn\" (UniqueName: \"kubernetes.io/projected/8389f4ae-eeb3-4dbf-ada2-14a152755af1-kube-api-access-v9rgn\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.377291 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.404008 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eec7ea5-0436-42dc-b49a-d7d9a902977b-operator-scripts\") pod \"0eec7ea5-0436-42dc-b49a-d7d9a902977b\" (UID: \"0eec7ea5-0436-42dc-b49a-d7d9a902977b\") " Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.404048 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjnq7\" (UniqueName: \"kubernetes.io/projected/0eec7ea5-0436-42dc-b49a-d7d9a902977b-kube-api-access-zjnq7\") pod \"0eec7ea5-0436-42dc-b49a-d7d9a902977b\" (UID: \"0eec7ea5-0436-42dc-b49a-d7d9a902977b\") " Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.406855 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eec7ea5-0436-42dc-b49a-d7d9a902977b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0eec7ea5-0436-42dc-b49a-d7d9a902977b" (UID: "0eec7ea5-0436-42dc-b49a-d7d9a902977b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.423775 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eec7ea5-0436-42dc-b49a-d7d9a902977b-kube-api-access-zjnq7" (OuterVolumeSpecName: "kube-api-access-zjnq7") pod "0eec7ea5-0436-42dc-b49a-d7d9a902977b" (UID: "0eec7ea5-0436-42dc-b49a-d7d9a902977b"). InnerVolumeSpecName "kube-api-access-zjnq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.452377 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.505489 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15c39433-1f67-4c52-9a8f-df981b9af880-config\") pod \"15c39433-1f67-4c52-9a8f-df981b9af880\" (UID: \"15c39433-1f67-4c52-9a8f-df981b9af880\") " Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.506418 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15c39433-1f67-4c52-9a8f-df981b9af880-dns-svc\") pod \"15c39433-1f67-4c52-9a8f-df981b9af880\" (UID: \"15c39433-1f67-4c52-9a8f-df981b9af880\") " Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.506551 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhggg\" (UniqueName: \"kubernetes.io/projected/15c39433-1f67-4c52-9a8f-df981b9af880-kube-api-access-rhggg\") pod \"15c39433-1f67-4c52-9a8f-df981b9af880\" (UID: \"15c39433-1f67-4c52-9a8f-df981b9af880\") " Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.507096 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eec7ea5-0436-42dc-b49a-d7d9a902977b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.507200 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjnq7\" (UniqueName: \"kubernetes.io/projected/0eec7ea5-0436-42dc-b49a-d7d9a902977b-kube-api-access-zjnq7\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.510222 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c39433-1f67-4c52-9a8f-df981b9af880-kube-api-access-rhggg" (OuterVolumeSpecName: "kube-api-access-rhggg") pod "15c39433-1f67-4c52-9a8f-df981b9af880" (UID: "15c39433-1f67-4c52-9a8f-df981b9af880"). InnerVolumeSpecName "kube-api-access-rhggg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.540463 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15c39433-1f67-4c52-9a8f-df981b9af880-config" (OuterVolumeSpecName: "config") pod "15c39433-1f67-4c52-9a8f-df981b9af880" (UID: "15c39433-1f67-4c52-9a8f-df981b9af880"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.544590 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15c39433-1f67-4c52-9a8f-df981b9af880-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15c39433-1f67-4c52-9a8f-df981b9af880" (UID: "15c39433-1f67-4c52-9a8f-df981b9af880"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.609319 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhggg\" (UniqueName: \"kubernetes.io/projected/15c39433-1f67-4c52-9a8f-df981b9af880-kube-api-access-rhggg\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.609356 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15c39433-1f67-4c52-9a8f-df981b9af880-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.609367 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15c39433-1f67-4c52-9a8f-df981b9af880-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.738984 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-lpvc6" event={"ID":"0eec7ea5-0436-42dc-b49a-d7d9a902977b","Type":"ContainerDied","Data":"c09296a2a09efbbaf2e6e5cd321790e0c1e9a7360a2c107e433705b30aba1676"} Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.739025 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c09296a2a09efbbaf2e6e5cd321790e0c1e9a7360a2c107e433705b30aba1676" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.740013 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-lpvc6" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.743798 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0d2a-account-create-update-ht6rp" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.743801 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0d2a-account-create-update-ht6rp" event={"ID":"8389f4ae-eeb3-4dbf-ada2-14a152755af1","Type":"ContainerDied","Data":"5b07f504293c577c7c45b6405bf9535f44573257bde0ba29a188fd022916b183"} Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.743847 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b07f504293c577c7c45b6405bf9535f44573257bde0ba29a188fd022916b183" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.745784 4865 generic.go:334] "Generic (PLEG): container finished" podID="15c39433-1f67-4c52-9a8f-df981b9af880" containerID="f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4" exitCode=0 Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.746769 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.747740 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" event={"ID":"15c39433-1f67-4c52-9a8f-df981b9af880","Type":"ContainerDied","Data":"f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4"} Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.747779 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cf7dc6df-xpjmc" event={"ID":"15c39433-1f67-4c52-9a8f-df981b9af880","Type":"ContainerDied","Data":"50e6491cb2d099a0fd0a0d557dfd04df7077d1aa98bd26a82263d7bf38fa7ebf"} Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.747807 4865 scope.go:117] "RemoveContainer" containerID="f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.773746 4865 scope.go:117] "RemoveContainer" containerID="f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.805055 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.823287 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cf7dc6df-xpjmc"] Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.839115 4865 scope.go:117] "RemoveContainer" containerID="f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.840324 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cf7dc6df-xpjmc"] Jan 23 12:09:55 crc kubenswrapper[4865]: E0123 12:09:55.845244 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4\": container with ID starting with f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4 not found: ID does not exist" containerID="f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.845293 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4"} err="failed to get container status \"f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4\": rpc error: code = NotFound desc = could not find container \"f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4\": container with ID starting with f80ff71f7b82814d45c7a51c48878231461f5a3da9a4255593c15fcb14e02da4 not found: ID does not exist" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.845318 4865 scope.go:117] "RemoveContainer" containerID="f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7" Jan 23 12:09:55 crc kubenswrapper[4865]: E0123 12:09:55.849422 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7\": container with ID starting with f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7 not found: ID does not exist" containerID="f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7" Jan 23 12:09:55 crc kubenswrapper[4865]: I0123 12:09:55.849460 4865 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7"} err="failed to get container status \"f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7\": rpc error: code = NotFound desc = could not find container \"f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7\": container with ID starting with f4e5c015d1015653c17515fd919f6fa9e16b8ec19bf95e3d7d029605d68324c7 not found: ID does not exist" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030006 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 23 12:09:56 crc kubenswrapper[4865]: E0123 12:09:56.030324 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eec7ea5-0436-42dc-b49a-d7d9a902977b" containerName="mariadb-database-create" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030351 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eec7ea5-0436-42dc-b49a-d7d9a902977b" containerName="mariadb-database-create" Jan 23 12:09:56 crc kubenswrapper[4865]: E0123 12:09:56.030379 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8104e0dd-89be-4d8f-a300-8c321e2959d0" containerName="mariadb-account-create-update" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030386 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8104e0dd-89be-4d8f-a300-8c321e2959d0" containerName="mariadb-account-create-update" Jan 23 12:09:56 crc kubenswrapper[4865]: E0123 12:09:56.030404 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b32b96a-bdd8-48f1-9f0a-891e76d0cd50" containerName="mariadb-account-create-update" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030410 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b32b96a-bdd8-48f1-9f0a-891e76d0cd50" containerName="mariadb-account-create-update" Jan 23 12:09:56 crc kubenswrapper[4865]: E0123 12:09:56.030422 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182030e3-73bd-492b-b070-7299395fd9e8" containerName="mariadb-database-create" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030427 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="182030e3-73bd-492b-b070-7299395fd9e8" containerName="mariadb-database-create" Jan 23 12:09:56 crc kubenswrapper[4865]: E0123 12:09:56.030437 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b36caf-c4a8-4b90-adf0-94f77019d3aa" containerName="mariadb-database-create" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030444 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b36caf-c4a8-4b90-adf0-94f77019d3aa" containerName="mariadb-database-create" Jan 23 12:09:56 crc kubenswrapper[4865]: E0123 12:09:56.030457 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a3a4e3-ba70-426a-abfc-6b8fd4c76632" containerName="mariadb-account-create-update" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030464 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a3a4e3-ba70-426a-abfc-6b8fd4c76632" containerName="mariadb-account-create-update" Jan 23 12:09:56 crc kubenswrapper[4865]: E0123 12:09:56.030477 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8389f4ae-eeb3-4dbf-ada2-14a152755af1" containerName="mariadb-account-create-update" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030484 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8389f4ae-eeb3-4dbf-ada2-14a152755af1" containerName="mariadb-account-create-update" 
Jan 23 12:09:56 crc kubenswrapper[4865]: E0123 12:09:56.030505 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c39433-1f67-4c52-9a8f-df981b9af880" containerName="dnsmasq-dns" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030511 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c39433-1f67-4c52-9a8f-df981b9af880" containerName="dnsmasq-dns" Jan 23 12:09:56 crc kubenswrapper[4865]: E0123 12:09:56.030523 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c39433-1f67-4c52-9a8f-df981b9af880" containerName="init" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030528 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c39433-1f67-4c52-9a8f-df981b9af880" containerName="init" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030687 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a3a4e3-ba70-426a-abfc-6b8fd4c76632" containerName="mariadb-account-create-update" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030698 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eec7ea5-0436-42dc-b49a-d7d9a902977b" containerName="mariadb-database-create" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030709 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="182030e3-73bd-492b-b070-7299395fd9e8" containerName="mariadb-database-create" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030716 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c39433-1f67-4c52-9a8f-df981b9af880" containerName="dnsmasq-dns" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030724 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8389f4ae-eeb3-4dbf-ada2-14a152755af1" containerName="mariadb-account-create-update" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030735 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8104e0dd-89be-4d8f-a300-8c321e2959d0" containerName="mariadb-account-create-update" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030742 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b32b96a-bdd8-48f1-9f0a-891e76d0cd50" containerName="mariadb-account-create-update" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.030751 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b36caf-c4a8-4b90-adf0-94f77019d3aa" containerName="mariadb-database-create" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.031532 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.033675 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.033775 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.033903 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.034616 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ng7tq" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.054594 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.115542 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82fe7f9-37d7-4874-9b2d-ba437546562f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.115588 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c82fe7f9-37d7-4874-9b2d-ba437546562f-scripts\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.115666 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82fe7f9-37d7-4874-9b2d-ba437546562f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.116034 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c82fe7f9-37d7-4874-9b2d-ba437546562f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.116081 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82fe7f9-37d7-4874-9b2d-ba437546562f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.116165 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2b8n\" (UniqueName: \"kubernetes.io/projected/c82fe7f9-37d7-4874-9b2d-ba437546562f-kube-api-access-v2b8n\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.116207 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82fe7f9-37d7-4874-9b2d-ba437546562f-config\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: 
I0123 12:09:56.135172 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c39433-1f67-4c52-9a8f-df981b9af880" path="/var/lib/kubelet/pods/15c39433-1f67-4c52-9a8f-df981b9af880/volumes" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.136970 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b32b96a-bdd8-48f1-9f0a-891e76d0cd50" path="/var/lib/kubelet/pods/7b32b96a-bdd8-48f1-9f0a-891e76d0cd50/volumes" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.217295 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82fe7f9-37d7-4874-9b2d-ba437546562f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.217339 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c82fe7f9-37d7-4874-9b2d-ba437546562f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.217361 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82fe7f9-37d7-4874-9b2d-ba437546562f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.217405 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2b8n\" (UniqueName: \"kubernetes.io/projected/c82fe7f9-37d7-4874-9b2d-ba437546562f-kube-api-access-v2b8n\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.217430 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82fe7f9-37d7-4874-9b2d-ba437546562f-config\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.217462 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82fe7f9-37d7-4874-9b2d-ba437546562f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.217483 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c82fe7f9-37d7-4874-9b2d-ba437546562f-scripts\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.218228 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c82fe7f9-37d7-4874-9b2d-ba437546562f-scripts\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.218475 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c82fe7f9-37d7-4874-9b2d-ba437546562f-ovn-rundir\") pod 
\"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.219290 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c82fe7f9-37d7-4874-9b2d-ba437546562f-config\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.240520 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82fe7f9-37d7-4874-9b2d-ba437546562f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.273983 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c82fe7f9-37d7-4874-9b2d-ba437546562f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.274089 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c82fe7f9-37d7-4874-9b2d-ba437546562f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.285027 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2b8n\" (UniqueName: \"kubernetes.io/projected/c82fe7f9-37d7-4874-9b2d-ba437546562f-kube-api-access-v2b8n\") pod \"ovn-northd-0\" (UID: \"c82fe7f9-37d7-4874-9b2d-ba437546562f\") " pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.352807 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.689648 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xcpzg"] Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.690948 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.692534 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4dh5h" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.694712 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.711256 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xcpzg"] Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.725207 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-config-data\") pod \"glance-db-sync-xcpzg\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.725290 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-db-sync-config-data\") pod \"glance-db-sync-xcpzg\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.725333 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq58p\" (UniqueName: \"kubernetes.io/projected/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-kube-api-access-xq58p\") pod \"glance-db-sync-xcpzg\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.725388 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-combined-ca-bundle\") pod \"glance-db-sync-xcpzg\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.805957 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 23 12:09:56 crc kubenswrapper[4865]: W0123 12:09:56.817776 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc82fe7f9_37d7_4874_9b2d_ba437546562f.slice/crio-c6987c100a7406527de3f4cb80dfb25111d8509e1744425aaebfebf8d0f64bbb WatchSource:0}: Error finding container c6987c100a7406527de3f4cb80dfb25111d8509e1744425aaebfebf8d0f64bbb: Status 404 returned error can't find the container with id c6987c100a7406527de3f4cb80dfb25111d8509e1744425aaebfebf8d0f64bbb Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.826333 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-config-data\") pod \"glance-db-sync-xcpzg\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.826387 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-db-sync-config-data\") pod \"glance-db-sync-xcpzg\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " 
pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.826414 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq58p\" (UniqueName: \"kubernetes.io/projected/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-kube-api-access-xq58p\") pod \"glance-db-sync-xcpzg\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.826445 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-combined-ca-bundle\") pod \"glance-db-sync-xcpzg\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.832490 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-db-sync-config-data\") pod \"glance-db-sync-xcpzg\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.832995 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-combined-ca-bundle\") pod \"glance-db-sync-xcpzg\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.833771 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-config-data\") pod \"glance-db-sync-xcpzg\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:56 crc kubenswrapper[4865]: I0123 12:09:56.842293 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq58p\" (UniqueName: \"kubernetes.io/projected/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-kube-api-access-xq58p\") pod \"glance-db-sync-xcpzg\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:57 crc kubenswrapper[4865]: I0123 12:09:57.007011 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xcpzg" Jan 23 12:09:57 crc kubenswrapper[4865]: I0123 12:09:57.533441 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xcpzg"] Jan 23 12:09:57 crc kubenswrapper[4865]: W0123 12:09:57.550743 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2f6099c_c8bb_4dfd_83ab_8b1084df2aee.slice/crio-1bd8505dd5dcbd3645b7f0d7ad4a10360a8e4e5453e2104a24c5d8f728f1ef9d WatchSource:0}: Error finding container 1bd8505dd5dcbd3645b7f0d7ad4a10360a8e4e5453e2104a24c5d8f728f1ef9d: Status 404 returned error can't find the container with id 1bd8505dd5dcbd3645b7f0d7ad4a10360a8e4e5453e2104a24c5d8f728f1ef9d Jan 23 12:09:57 crc kubenswrapper[4865]: I0123 12:09:57.781888 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c82fe7f9-37d7-4874-9b2d-ba437546562f","Type":"ContainerStarted","Data":"1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1"} Jan 23 12:09:57 crc kubenswrapper[4865]: I0123 12:09:57.782456 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c82fe7f9-37d7-4874-9b2d-ba437546562f","Type":"ContainerStarted","Data":"c6987c100a7406527de3f4cb80dfb25111d8509e1744425aaebfebf8d0f64bbb"} Jan 23 12:09:57 crc kubenswrapper[4865]: I0123 12:09:57.782940 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xcpzg" event={"ID":"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee","Type":"ContainerStarted","Data":"1bd8505dd5dcbd3645b7f0d7ad4a10360a8e4e5453e2104a24c5d8f728f1ef9d"} Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.440050 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zpfjd"] Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.441219 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zpfjd" Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.443415 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.461799 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zpfjd"] Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.559104 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpbxk\" (UniqueName: \"kubernetes.io/projected/8d364132-9c1e-43f3-a945-821d02b9b02d-kube-api-access-bpbxk\") pod \"root-account-create-update-zpfjd\" (UID: \"8d364132-9c1e-43f3-a945-821d02b9b02d\") " pod="openstack/root-account-create-update-zpfjd" Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.559432 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d364132-9c1e-43f3-a945-821d02b9b02d-operator-scripts\") pod \"root-account-create-update-zpfjd\" (UID: \"8d364132-9c1e-43f3-a945-821d02b9b02d\") " pod="openstack/root-account-create-update-zpfjd" Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.661083 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d364132-9c1e-43f3-a945-821d02b9b02d-operator-scripts\") pod \"root-account-create-update-zpfjd\" (UID: \"8d364132-9c1e-43f3-a945-821d02b9b02d\") " pod="openstack/root-account-create-update-zpfjd" Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.661172 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpbxk\" (UniqueName: \"kubernetes.io/projected/8d364132-9c1e-43f3-a945-821d02b9b02d-kube-api-access-bpbxk\") pod \"root-account-create-update-zpfjd\" (UID: \"8d364132-9c1e-43f3-a945-821d02b9b02d\") " pod="openstack/root-account-create-update-zpfjd" Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.661938 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d364132-9c1e-43f3-a945-821d02b9b02d-operator-scripts\") pod \"root-account-create-update-zpfjd\" (UID: \"8d364132-9c1e-43f3-a945-821d02b9b02d\") " pod="openstack/root-account-create-update-zpfjd" Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.694314 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpbxk\" (UniqueName: \"kubernetes.io/projected/8d364132-9c1e-43f3-a945-821d02b9b02d-kube-api-access-bpbxk\") pod \"root-account-create-update-zpfjd\" (UID: \"8d364132-9c1e-43f3-a945-821d02b9b02d\") " pod="openstack/root-account-create-update-zpfjd" Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.773934 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zpfjd" Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.801415 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c82fe7f9-37d7-4874-9b2d-ba437546562f","Type":"ContainerStarted","Data":"ec3084d492bab7ceb70fcfc35e21c284f44b77449f4afe4db1f3ca3706de789d"} Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.801480 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.806413 4865 generic.go:334] "Generic (PLEG): container finished" podID="b7e81618-9835-4b8e-a9fc-4e2506ea7ed8" containerID="e3def9ef2bd9b1a613c3b5dce12713d659402200fcabccbd2528eb3c4ee7e095" exitCode=0 Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.806452 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cxl62" event={"ID":"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8","Type":"ContainerDied","Data":"e3def9ef2bd9b1a613c3b5dce12713d659402200fcabccbd2528eb3c4ee7e095"} Jan 23 12:09:58 crc kubenswrapper[4865]: I0123 12:09:58.852386 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.093179875 podStartE2EDuration="3.852366508s" podCreationTimestamp="2026-01-23 12:09:55 +0000 UTC" firstStartedPulling="2026-01-23 12:09:56.819808928 +0000 UTC m=+1040.988881154" lastFinishedPulling="2026-01-23 12:09:57.578995561 +0000 UTC m=+1041.748067787" observedRunningTime="2026-01-23 12:09:58.828923032 +0000 UTC m=+1042.997995268" watchObservedRunningTime="2026-01-23 12:09:58.852366508 +0000 UTC m=+1043.021438734" Jan 23 12:09:59 crc kubenswrapper[4865]: I0123 12:09:59.233516 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zpfjd"] Jan 23 12:09:59 crc kubenswrapper[4865]: W0123 12:09:59.246165 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d364132_9c1e_43f3_a945_821d02b9b02d.slice/crio-11c36834b8eed328d393052ad73971743b851ea754e5e03a9e3e8de65d4689d3 WatchSource:0}: Error finding container 11c36834b8eed328d393052ad73971743b851ea754e5e03a9e3e8de65d4689d3: Status 404 returned error can't find the container with id 11c36834b8eed328d393052ad73971743b851ea754e5e03a9e3e8de65d4689d3 Jan 23 12:09:59 crc kubenswrapper[4865]: I0123 12:09:59.814900 4865 generic.go:334] "Generic (PLEG): container finished" podID="8d364132-9c1e-43f3-a945-821d02b9b02d" containerID="e53270601ed6c20b37cb9365a03027c97883ff794d9e021e00cbf472e29191b5" exitCode=0 Jan 23 12:09:59 crc kubenswrapper[4865]: I0123 12:09:59.815090 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zpfjd" event={"ID":"8d364132-9c1e-43f3-a945-821d02b9b02d","Type":"ContainerDied","Data":"e53270601ed6c20b37cb9365a03027c97883ff794d9e021e00cbf472e29191b5"} Jan 23 12:09:59 crc kubenswrapper[4865]: I0123 12:09:59.815290 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zpfjd" event={"ID":"8d364132-9c1e-43f3-a945-821d02b9b02d","Type":"ContainerStarted","Data":"11c36834b8eed328d393052ad73971743b851ea754e5e03a9e3e8de65d4689d3"} Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.148190 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.289516 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-etc-swift\") pod \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.289558 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-swiftconf\") pod \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.289659 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-scripts\") pod \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.289704 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-combined-ca-bundle\") pod \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.289734 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbtvz\" (UniqueName: \"kubernetes.io/projected/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-kube-api-access-nbtvz\") pod \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.289748 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-dispersionconf\") pod \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.289834 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-ring-data-devices\") pod \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\" (UID: \"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8\") " Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.290890 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8" (UID: "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.291216 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8" (UID: "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.296197 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-kube-api-access-nbtvz" (OuterVolumeSpecName: "kube-api-access-nbtvz") pod "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8" (UID: "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8"). InnerVolumeSpecName "kube-api-access-nbtvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.300155 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8" (UID: "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.312447 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-scripts" (OuterVolumeSpecName: "scripts") pod "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8" (UID: "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.314340 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8" (UID: "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.319729 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8" (UID: "b7e81618-9835-4b8e-a9fc-4e2506ea7ed8"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.391302 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.391337 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.391352 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbtvz\" (UniqueName: \"kubernetes.io/projected/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-kube-api-access-nbtvz\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.391365 4865 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.391377 4865 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.391388 4865 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.391483 4865 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b7e81618-9835-4b8e-a9fc-4e2506ea7ed8-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.822463 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-cxl62" Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.822477 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-cxl62" event={"ID":"b7e81618-9835-4b8e-a9fc-4e2506ea7ed8","Type":"ContainerDied","Data":"1d0f4e37d787585c0dce1d313be75c018e8d2cc232ea6d3ecf87443e59f265d7"} Jan 23 12:10:00 crc kubenswrapper[4865]: I0123 12:10:00.822525 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d0f4e37d787585c0dce1d313be75c018e8d2cc232ea6d3ecf87443e59f265d7" Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.171484 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zpfjd" Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.303726 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpbxk\" (UniqueName: \"kubernetes.io/projected/8d364132-9c1e-43f3-a945-821d02b9b02d-kube-api-access-bpbxk\") pod \"8d364132-9c1e-43f3-a945-821d02b9b02d\" (UID: \"8d364132-9c1e-43f3-a945-821d02b9b02d\") " Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.304165 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d364132-9c1e-43f3-a945-821d02b9b02d-operator-scripts\") pod \"8d364132-9c1e-43f3-a945-821d02b9b02d\" (UID: \"8d364132-9c1e-43f3-a945-821d02b9b02d\") " Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.304568 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d364132-9c1e-43f3-a945-821d02b9b02d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d364132-9c1e-43f3-a945-821d02b9b02d" (UID: "8d364132-9c1e-43f3-a945-821d02b9b02d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.304775 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d364132-9c1e-43f3-a945-821d02b9b02d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.307905 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d364132-9c1e-43f3-a945-821d02b9b02d-kube-api-access-bpbxk" (OuterVolumeSpecName: "kube-api-access-bpbxk") pod "8d364132-9c1e-43f3-a945-821d02b9b02d" (UID: "8d364132-9c1e-43f3-a945-821d02b9b02d"). InnerVolumeSpecName "kube-api-access-bpbxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.405909 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpbxk\" (UniqueName: \"kubernetes.io/projected/8d364132-9c1e-43f3-a945-821d02b9b02d-kube-api-access-bpbxk\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.507657 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.525484 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/01f82f85-33db-4f45-97c6-84f6dd7689c8-etc-swift\") pod \"swift-storage-0\" (UID: \"01f82f85-33db-4f45-97c6-84f6dd7689c8\") " pod="openstack/swift-storage-0" Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.618132 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.833061 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zpfjd" event={"ID":"8d364132-9c1e-43f3-a945-821d02b9b02d","Type":"ContainerDied","Data":"11c36834b8eed328d393052ad73971743b851ea754e5e03a9e3e8de65d4689d3"} Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.833496 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c36834b8eed328d393052ad73971743b851ea754e5e03a9e3e8de65d4689d3" Jan 23 12:10:01 crc kubenswrapper[4865]: I0123 12:10:01.833192 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zpfjd" Jan 23 12:10:02 crc kubenswrapper[4865]: I0123 12:10:02.241850 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 23 12:10:02 crc kubenswrapper[4865]: I0123 12:10:02.839976 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"3b4016589f112e33bfcaa3e41d17c670b7f6e7f394155ebad4b4d0e8acda50c6"} Jan 23 12:10:04 crc kubenswrapper[4865]: I0123 12:10:04.384003 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zpfjd"] Jan 23 12:10:04 crc kubenswrapper[4865]: I0123 12:10:04.392980 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zpfjd"] Jan 23 12:10:05 crc kubenswrapper[4865]: I0123 12:10:05.861511 4865 generic.go:334] "Generic (PLEG): container finished" podID="10a07490-f361-43e5-8d3e-a8bd917b3b84" containerID="5e4cca6ecb16f4a92d5899f94604c447d30034fd3d17d308d5dacb49f13a795c" exitCode=0 Jan 23 12:10:05 crc kubenswrapper[4865]: I0123 12:10:05.861571 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10a07490-f361-43e5-8d3e-a8bd917b3b84","Type":"ContainerDied","Data":"5e4cca6ecb16f4a92d5899f94604c447d30034fd3d17d308d5dacb49f13a795c"} Jan 23 12:10:05 crc kubenswrapper[4865]: I0123 12:10:05.862830 4865 generic.go:334] "Generic (PLEG): container finished" podID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" containerID="fe9e5ea611f2b63e2e08e8fea6bfb1afbacd1402c29a26849582d14d630918e8" exitCode=0 Jan 23 12:10:05 crc kubenswrapper[4865]: I0123 12:10:05.862854 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ebb7983c-3aed-42f5-8635-8188f7abb9d5","Type":"ContainerDied","Data":"fe9e5ea611f2b63e2e08e8fea6bfb1afbacd1402c29a26849582d14d630918e8"} Jan 23 12:10:06 crc kubenswrapper[4865]: I0123 12:10:06.128971 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d364132-9c1e-43f3-a945-821d02b9b02d" path="/var/lib/kubelet/pods/8d364132-9c1e-43f3-a945-821d02b9b02d/volumes" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.416313 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vfxkd"] Jan 23 12:10:09 crc kubenswrapper[4865]: E0123 12:10:09.417307 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7e81618-9835-4b8e-a9fc-4e2506ea7ed8" containerName="swift-ring-rebalance" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.417323 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7e81618-9835-4b8e-a9fc-4e2506ea7ed8" containerName="swift-ring-rebalance" Jan 23 12:10:09 crc 
kubenswrapper[4865]: E0123 12:10:09.417352 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d364132-9c1e-43f3-a945-821d02b9b02d" containerName="mariadb-account-create-update" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.417360 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d364132-9c1e-43f3-a945-821d02b9b02d" containerName="mariadb-account-create-update" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.417554 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7e81618-9835-4b8e-a9fc-4e2506ea7ed8" containerName="swift-ring-rebalance" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.417575 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d364132-9c1e-43f3-a945-821d02b9b02d" containerName="mariadb-account-create-update" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.418349 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vfxkd" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.431683 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vfxkd"] Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.433915 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.547583 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f54qm\" (UniqueName: \"kubernetes.io/projected/9788e5dc-7889-456b-934c-09bf3aa01f25-kube-api-access-f54qm\") pod \"root-account-create-update-vfxkd\" (UID: \"9788e5dc-7889-456b-934c-09bf3aa01f25\") " pod="openstack/root-account-create-update-vfxkd" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.547815 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9788e5dc-7889-456b-934c-09bf3aa01f25-operator-scripts\") pod \"root-account-create-update-vfxkd\" (UID: \"9788e5dc-7889-456b-934c-09bf3aa01f25\") " pod="openstack/root-account-create-update-vfxkd" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.649577 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f54qm\" (UniqueName: \"kubernetes.io/projected/9788e5dc-7889-456b-934c-09bf3aa01f25-kube-api-access-f54qm\") pod \"root-account-create-update-vfxkd\" (UID: \"9788e5dc-7889-456b-934c-09bf3aa01f25\") " pod="openstack/root-account-create-update-vfxkd" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.649663 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9788e5dc-7889-456b-934c-09bf3aa01f25-operator-scripts\") pod \"root-account-create-update-vfxkd\" (UID: \"9788e5dc-7889-456b-934c-09bf3aa01f25\") " pod="openstack/root-account-create-update-vfxkd" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.650307 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9788e5dc-7889-456b-934c-09bf3aa01f25-operator-scripts\") pod \"root-account-create-update-vfxkd\" (UID: \"9788e5dc-7889-456b-934c-09bf3aa01f25\") " pod="openstack/root-account-create-update-vfxkd" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.667304 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-f54qm\" (UniqueName: \"kubernetes.io/projected/9788e5dc-7889-456b-934c-09bf3aa01f25-kube-api-access-f54qm\") pod \"root-account-create-update-vfxkd\" (UID: \"9788e5dc-7889-456b-934c-09bf3aa01f25\") " pod="openstack/root-account-create-update-vfxkd" Jan 23 12:10:09 crc kubenswrapper[4865]: I0123 12:10:09.736160 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vfxkd" Jan 23 12:10:11 crc kubenswrapper[4865]: I0123 12:10:11.407326 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 23 12:10:13 crc kubenswrapper[4865]: I0123 12:10:13.187126 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" probeResult="failure" output=< Jan 23 12:10:13 crc kubenswrapper[4865]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 12:10:13 crc kubenswrapper[4865]: > Jan 23 12:10:13 crc kubenswrapper[4865]: E0123 12:10:13.891476 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-swift-account:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:10:13 crc kubenswrapper[4865]: E0123 12:10:13.891815 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-swift-account:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:10:13 crc kubenswrapper[4865]: E0123 12:10:13.891929 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:account-server,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-swift-account:c3923531bcda0b0811b2d5053f189beb,Command:[/usr/bin/swift-account-server /etc/swift/account-server.conf.d 
-v],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:account,HostPort:0,ContainerPort:6202,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndch584hb8h68ch549h56bhb6h56chd9h558h659hf8h575hf4h66dh567h686h68ch79h5cfh9h64hd8h5bch5c9h574hcbh589h54ch59bh584h8fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:swift,ReadOnly:false,MountPath:/srv/node/pv,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cache,ReadOnly:false,MountPath:/var/cache/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lock,ReadOnly:false,MountPath:/var/lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kv9dk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-storage-0_openstack(01f82f85-33db-4f45-97c6-84f6dd7689c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:10:14 crc kubenswrapper[4865]: I0123 12:10:14.616911 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vfxkd"] Jan 23 12:10:14 crc kubenswrapper[4865]: I0123 12:10:14.928498 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xcpzg" event={"ID":"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee","Type":"ContainerStarted","Data":"ad7026dd1880909fa3617e0bc706c84f44e7de2456c653ffc29c3bcdcd365689"} Jan 23 12:10:14 crc kubenswrapper[4865]: I0123 12:10:14.935859 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10a07490-f361-43e5-8d3e-a8bd917b3b84","Type":"ContainerStarted","Data":"25508c7568020837db3aff1bf4699e4717036225593f03472819e46d410c7752"} Jan 23 12:10:14 crc kubenswrapper[4865]: I0123 12:10:14.936125 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 12:10:14 crc kubenswrapper[4865]: I0123 12:10:14.937711 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfxkd" event={"ID":"9788e5dc-7889-456b-934c-09bf3aa01f25","Type":"ContainerStarted","Data":"6a66ff237f4fa45b3b1217619406d39eb8ac7226e40a83444ce20bfeb6ed1558"} Jan 23 12:10:14 crc kubenswrapper[4865]: I0123 12:10:14.937757 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-vfxkd" event={"ID":"9788e5dc-7889-456b-934c-09bf3aa01f25","Type":"ContainerStarted","Data":"97d9b32e5096203cbdab976a200b1d3657ec95da335bdf7581fa95c27e26102d"} Jan 23 12:10:14 crc kubenswrapper[4865]: I0123 12:10:14.940082 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ebb7983c-3aed-42f5-8635-8188f7abb9d5","Type":"ContainerStarted","Data":"5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813"} Jan 23 12:10:14 crc kubenswrapper[4865]: I0123 12:10:14.940272 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:10:14 crc kubenswrapper[4865]: I0123 12:10:14.975926 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-xcpzg" podStartSLOduration=2.346711783 podStartE2EDuration="18.975909283s" podCreationTimestamp="2026-01-23 12:09:56 +0000 UTC" firstStartedPulling="2026-01-23 12:09:57.569532409 +0000 UTC m=+1041.738604635" lastFinishedPulling="2026-01-23 12:10:14.198729909 +0000 UTC m=+1058.367802135" observedRunningTime="2026-01-23 12:10:14.953017882 +0000 UTC m=+1059.122090108" watchObservedRunningTime="2026-01-23 12:10:14.975909283 +0000 UTC m=+1059.144981509" Jan 23 12:10:14 crc kubenswrapper[4865]: I0123 12:10:14.978403 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=47.941514247 podStartE2EDuration="1m47.978394784s" podCreationTimestamp="2026-01-23 12:08:27 +0000 UTC" firstStartedPulling="2026-01-23 12:08:29.185824493 +0000 UTC m=+953.354896709" lastFinishedPulling="2026-01-23 12:09:29.22270502 +0000 UTC m=+1013.391777246" observedRunningTime="2026-01-23 12:10:14.972565581 +0000 UTC m=+1059.141637807" watchObservedRunningTime="2026-01-23 12:10:14.978394784 +0000 UTC m=+1059.147467010" Jan 23 12:10:15 crc kubenswrapper[4865]: I0123 12:10:15.000879 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=49.395104503 podStartE2EDuration="1m49.000864005s" podCreationTimestamp="2026-01-23 12:08:26 +0000 UTC" firstStartedPulling="2026-01-23 12:08:29.607860025 +0000 UTC m=+953.776932251" lastFinishedPulling="2026-01-23 12:09:29.213619527 +0000 UTC m=+1013.382691753" observedRunningTime="2026-01-23 12:10:14.99857987 +0000 UTC m=+1059.167652096" watchObservedRunningTime="2026-01-23 12:10:15.000864005 +0000 UTC m=+1059.169936231" Jan 23 12:10:15 crc kubenswrapper[4865]: I0123 12:10:15.023850 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-vfxkd" podStartSLOduration=6.023832099 podStartE2EDuration="6.023832099s" podCreationTimestamp="2026-01-23 12:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:10:15.019482893 +0000 UTC m=+1059.188555119" watchObservedRunningTime="2026-01-23 12:10:15.023832099 +0000 UTC m=+1059.192904325" Jan 23 12:10:15 crc kubenswrapper[4865]: I0123 12:10:15.948538 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"ae94a524af87b6aea909714bd413afb629d6b28a9389b2ddbe0b55f5fae25e6c"} Jan 23 12:10:15 crc kubenswrapper[4865]: I0123 12:10:15.950928 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="9788e5dc-7889-456b-934c-09bf3aa01f25" containerID="6a66ff237f4fa45b3b1217619406d39eb8ac7226e40a83444ce20bfeb6ed1558" exitCode=0 Jan 23 12:10:15 crc kubenswrapper[4865]: I0123 12:10:15.951453 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfxkd" event={"ID":"9788e5dc-7889-456b-934c-09bf3aa01f25","Type":"ContainerDied","Data":"6a66ff237f4fa45b3b1217619406d39eb8ac7226e40a83444ce20bfeb6ed1558"} Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.228136 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.247969 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.351461 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vfxkd" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.375300 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9788e5dc-7889-456b-934c-09bf3aa01f25-operator-scripts\") pod \"9788e5dc-7889-456b-934c-09bf3aa01f25\" (UID: \"9788e5dc-7889-456b-934c-09bf3aa01f25\") " Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.375560 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f54qm\" (UniqueName: \"kubernetes.io/projected/9788e5dc-7889-456b-934c-09bf3aa01f25-kube-api-access-f54qm\") pod \"9788e5dc-7889-456b-934c-09bf3aa01f25\" (UID: \"9788e5dc-7889-456b-934c-09bf3aa01f25\") " Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.376919 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9788e5dc-7889-456b-934c-09bf3aa01f25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9788e5dc-7889-456b-934c-09bf3aa01f25" (UID: "9788e5dc-7889-456b-934c-09bf3aa01f25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.411829 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" probeResult="failure" output=< Jan 23 12:10:17 crc kubenswrapper[4865]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 23 12:10:17 crc kubenswrapper[4865]: > Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.413116 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9788e5dc-7889-456b-934c-09bf3aa01f25-kube-api-access-f54qm" (OuterVolumeSpecName: "kube-api-access-f54qm") pod "9788e5dc-7889-456b-934c-09bf3aa01f25" (UID: "9788e5dc-7889-456b-934c-09bf3aa01f25"). InnerVolumeSpecName "kube-api-access-f54qm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.478095 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f54qm\" (UniqueName: \"kubernetes.io/projected/9788e5dc-7889-456b-934c-09bf3aa01f25-kube-api-access-f54qm\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.478170 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9788e5dc-7889-456b-934c-09bf3aa01f25-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.492919 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hz4vm-config-9srxx"] Jan 23 12:10:17 crc kubenswrapper[4865]: E0123 12:10:17.493413 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9788e5dc-7889-456b-934c-09bf3aa01f25" containerName="mariadb-account-create-update" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.493509 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9788e5dc-7889-456b-934c-09bf3aa01f25" containerName="mariadb-account-create-update" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.493739 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="9788e5dc-7889-456b-934c-09bf3aa01f25" containerName="mariadb-account-create-update" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.494333 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.496245 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.505714 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hz4vm-config-9srxx"] Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.580977 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcmtt\" (UniqueName: \"kubernetes.io/projected/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-kube-api-access-fcmtt\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.581036 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-run\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.581117 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-log-ovn\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.581192 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-additional-scripts\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: 
\"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.581295 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-run-ovn\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.581358 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-scripts\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.684148 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-run\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.684206 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcmtt\" (UniqueName: \"kubernetes.io/projected/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-kube-api-access-fcmtt\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.684243 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-log-ovn\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.684277 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-additional-scripts\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.684312 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-run-ovn\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.684338 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-scripts\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.686143 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-scripts\") pod 
\"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.686421 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-run\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.686715 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-log-ovn\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.687114 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-additional-scripts\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.687168 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-run-ovn\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.716245 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcmtt\" (UniqueName: \"kubernetes.io/projected/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-kube-api-access-fcmtt\") pod \"ovn-controller-hz4vm-config-9srxx\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.811947 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.973748 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"11494f08513183f44752d8d6d1986166a8f16473b1ca18d79d20546d8ab24eb2"} Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.974023 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"0e6d5e94ed759e4541d88e61e682172c9485406dcd6088a9d3880b071462834b"} Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.976654 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vfxkd" Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.976733 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vfxkd" event={"ID":"9788e5dc-7889-456b-934c-09bf3aa01f25","Type":"ContainerDied","Data":"97d9b32e5096203cbdab976a200b1d3657ec95da335bdf7581fa95c27e26102d"} Jan 23 12:10:17 crc kubenswrapper[4865]: I0123 12:10:17.976752 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97d9b32e5096203cbdab976a200b1d3657ec95da335bdf7581fa95c27e26102d" Jan 23 12:10:18 crc kubenswrapper[4865]: I0123 12:10:18.114012 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hz4vm-config-9srxx"] Jan 23 12:10:18 crc kubenswrapper[4865]: I0123 12:10:18.776751 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:10:18 crc kubenswrapper[4865]: I0123 12:10:18.777037 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:10:18 crc kubenswrapper[4865]: I0123 12:10:18.777098 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 12:10:18 crc kubenswrapper[4865]: I0123 12:10:18.777942 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"345cdb54622a6a314c05af6fc9f3dea4d21afb272e6e5c0d8f125f9458dfa194"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 12:10:18 crc kubenswrapper[4865]: I0123 12:10:18.777992 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://345cdb54622a6a314c05af6fc9f3dea4d21afb272e6e5c0d8f125f9458dfa194" gracePeriod=600 Jan 23 12:10:18 crc kubenswrapper[4865]: I0123 12:10:18.986807 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hz4vm-config-9srxx" event={"ID":"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad","Type":"ContainerStarted","Data":"c914ceab4d869f7b580f1846489fae378d704b7e450b541c717397b7894f4daf"} Jan 23 12:10:18 crc kubenswrapper[4865]: I0123 12:10:18.986866 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hz4vm-config-9srxx" event={"ID":"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad","Type":"ContainerStarted","Data":"ddcdf939d27d9200b068b210edad6e82078d63ad66c7085c4554c1df1beeaac7"} Jan 23 12:10:18 crc kubenswrapper[4865]: I0123 12:10:18.991672 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"5033f92b05f00a9848b62aa14428edffc167891b8cfbf8c29e4834011c855153"} Jan 23 12:10:19 crc kubenswrapper[4865]: I0123 
12:10:19.007480 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hz4vm-config-9srxx" podStartSLOduration=2.007464989 podStartE2EDuration="2.007464989s" podCreationTimestamp="2026-01-23 12:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:10:19.003266146 +0000 UTC m=+1063.172338372" watchObservedRunningTime="2026-01-23 12:10:19.007464989 +0000 UTC m=+1063.176537205" Jan 23 12:10:20 crc kubenswrapper[4865]: I0123 12:10:20.001930 4865 generic.go:334] "Generic (PLEG): container finished" podID="a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad" containerID="c914ceab4d869f7b580f1846489fae378d704b7e450b541c717397b7894f4daf" exitCode=0 Jan 23 12:10:20 crc kubenswrapper[4865]: I0123 12:10:20.002042 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hz4vm-config-9srxx" event={"ID":"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad","Type":"ContainerDied","Data":"c914ceab4d869f7b580f1846489fae378d704b7e450b541c717397b7894f4daf"} Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.011458 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="345cdb54622a6a314c05af6fc9f3dea4d21afb272e6e5c0d8f125f9458dfa194" exitCode=0 Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.011682 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"345cdb54622a6a314c05af6fc9f3dea4d21afb272e6e5c0d8f125f9458dfa194"} Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.011721 4865 scope.go:117] "RemoveContainer" containerID="6d9cd586c30c8b5457d84dc80396ec6c6d5bb6dd4d7eb00e56b29553f41be78a" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.365474 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.448041 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-run\") pod \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.448088 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-log-ovn\") pod \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.448109 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-run" (OuterVolumeSpecName: "var-run") pod "a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad" (UID: "a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.448196 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcmtt\" (UniqueName: \"kubernetes.io/projected/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-kube-api-access-fcmtt\") pod \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.448209 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad" (UID: "a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.448224 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-scripts\") pod \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.448410 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-run-ovn\") pod \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.448451 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-additional-scripts\") pod \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\" (UID: \"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad\") " Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.449140 4865 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.449160 4865 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.449305 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-scripts" (OuterVolumeSpecName: "scripts") pod "a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad" (UID: "a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.449355 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad" (UID: "a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.449657 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad" (UID: "a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.467583 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-kube-api-access-fcmtt" (OuterVolumeSpecName: "kube-api-access-fcmtt") pod "a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad" (UID: "a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad"). InnerVolumeSpecName "kube-api-access-fcmtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.550224 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcmtt\" (UniqueName: \"kubernetes.io/projected/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-kube-api-access-fcmtt\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.550259 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.550269 4865 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:21 crc kubenswrapper[4865]: I0123 12:10:21.550278 4865 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.019790 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hz4vm-config-9srxx" event={"ID":"a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad","Type":"ContainerDied","Data":"ddcdf939d27d9200b068b210edad6e82078d63ad66c7085c4554c1df1beeaac7"} Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.020076 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddcdf939d27d9200b068b210edad6e82078d63ad66c7085c4554c1df1beeaac7" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.019831 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hz4vm-config-9srxx" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.022227 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"e8f1e0b3d016dae118cc529905287d1f5d83d908d73deab63599d7b4262f2021"} Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.128538 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hz4vm-config-9srxx"] Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.128571 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hz4vm-config-9srxx"] Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.245021 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hz4vm-config-lqqht"] Jan 23 12:10:22 crc kubenswrapper[4865]: E0123 12:10:22.245348 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad" containerName="ovn-config" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.245365 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad" containerName="ovn-config" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.245550 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad" containerName="ovn-config" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.246252 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.248907 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.260024 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j4jv\" (UniqueName: \"kubernetes.io/projected/c9a542e6-a9df-4f77-a583-36f51f0f7db5-kube-api-access-2j4jv\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.260155 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-run\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.260180 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a542e6-a9df-4f77-a583-36f51f0f7db5-additional-scripts\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.260215 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-log-ovn\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 
12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.260231 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a542e6-a9df-4f77-a583-36f51f0f7db5-scripts\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.260256 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-run-ovn\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.273890 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hz4vm-config-lqqht"] Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.365441 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-run-ovn\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.365508 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j4jv\" (UniqueName: \"kubernetes.io/projected/c9a542e6-a9df-4f77-a583-36f51f0f7db5-kube-api-access-2j4jv\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.365588 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-run\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.365637 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a542e6-a9df-4f77-a583-36f51f0f7db5-additional-scripts\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.365665 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-log-ovn\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.365681 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a542e6-a9df-4f77-a583-36f51f0f7db5-scripts\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.365809 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-run-ovn\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.365876 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-run\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.366613 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-log-ovn\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.366706 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a542e6-a9df-4f77-a583-36f51f0f7db5-additional-scripts\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.367458 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a542e6-a9df-4f77-a583-36f51f0f7db5-scripts\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.409503 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j4jv\" (UniqueName: \"kubernetes.io/projected/c9a542e6-a9df-4f77-a583-36f51f0f7db5-kube-api-access-2j4jv\") pod \"ovn-controller-hz4vm-config-lqqht\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.529419 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hz4vm" Jan 23 12:10:22 crc kubenswrapper[4865]: I0123 12:10:22.563955 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:23 crc kubenswrapper[4865]: I0123 12:10:23.574317 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hz4vm-config-lqqht"] Jan 23 12:10:24 crc kubenswrapper[4865]: I0123 12:10:24.036378 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"3378c14edaa07aaa54b42bba5ff09addd87dc5e8c518ae175f143c8dc81e2b80"} Jan 23 12:10:24 crc kubenswrapper[4865]: I0123 12:10:24.036898 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"2d79863d7d4e72ca19224d3c70062b240868f81e43f0b1dd130022505633d7a2"} Jan 23 12:10:24 crc kubenswrapper[4865]: I0123 12:10:24.037481 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hz4vm-config-lqqht" event={"ID":"c9a542e6-a9df-4f77-a583-36f51f0f7db5","Type":"ContainerStarted","Data":"6efb8ee2a01774c83baf5e88dcf56820583bb17b76d503cff132ff1fba71eb2a"} Jan 23 12:10:24 crc kubenswrapper[4865]: I0123 12:10:24.037526 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hz4vm-config-lqqht" event={"ID":"c9a542e6-a9df-4f77-a583-36f51f0f7db5","Type":"ContainerStarted","Data":"93d6e9e5f32cf8a2e9d493130c61dff0b9bbbe3a36787f012cc6d1e3d42491e0"} Jan 23 12:10:24 crc kubenswrapper[4865]: I0123 12:10:24.065198 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hz4vm-config-lqqht" podStartSLOduration=2.065177597 podStartE2EDuration="2.065177597s" podCreationTimestamp="2026-01-23 12:10:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:10:24.061920387 +0000 UTC m=+1068.230992613" watchObservedRunningTime="2026-01-23 12:10:24.065177597 +0000 UTC m=+1068.234249823" Jan 23 12:10:24 crc kubenswrapper[4865]: I0123 12:10:24.127150 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad" path="/var/lib/kubelet/pods/a3b4f54e-cdce-48b5-9f31-4e6e7dc73cad/volumes" Jan 23 12:10:25 crc kubenswrapper[4865]: I0123 12:10:25.048216 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"5c8e0b01b5301bd04d94bb183fd2764f8ac6ccce61cc1f62afa201adc93ea67b"} Jan 23 12:10:25 crc kubenswrapper[4865]: I0123 12:10:25.048509 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"7d0d8b7bfa81c3381669d6e87b1882dc7105155330b2e1e1c895c8a9ef253365"} Jan 23 12:10:25 crc kubenswrapper[4865]: I0123 12:10:25.049501 4865 generic.go:334] "Generic (PLEG): container finished" podID="c9a542e6-a9df-4f77-a583-36f51f0f7db5" containerID="6efb8ee2a01774c83baf5e88dcf56820583bb17b76d503cff132ff1fba71eb2a" exitCode=0 Jan 23 12:10:25 crc kubenswrapper[4865]: I0123 12:10:25.049524 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hz4vm-config-lqqht" event={"ID":"c9a542e6-a9df-4f77-a583-36f51f0f7db5","Type":"ContainerDied","Data":"6efb8ee2a01774c83baf5e88dcf56820583bb17b76d503cff132ff1fba71eb2a"} Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.355394 
4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.556244 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-run\") pod \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.556310 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j4jv\" (UniqueName: \"kubernetes.io/projected/c9a542e6-a9df-4f77-a583-36f51f0f7db5-kube-api-access-2j4jv\") pod \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.556423 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-log-ovn\") pod \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.556477 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-run" (OuterVolumeSpecName: "var-run") pod "c9a542e6-a9df-4f77-a583-36f51f0f7db5" (UID: "c9a542e6-a9df-4f77-a583-36f51f0f7db5"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.556509 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a542e6-a9df-4f77-a583-36f51f0f7db5-additional-scripts\") pod \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.556537 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c9a542e6-a9df-4f77-a583-36f51f0f7db5" (UID: "c9a542e6-a9df-4f77-a583-36f51f0f7db5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.556550 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-run-ovn\") pod \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.556689 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a542e6-a9df-4f77-a583-36f51f0f7db5-scripts\") pod \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\" (UID: \"c9a542e6-a9df-4f77-a583-36f51f0f7db5\") " Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.556746 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c9a542e6-a9df-4f77-a583-36f51f0f7db5" (UID: "c9a542e6-a9df-4f77-a583-36f51f0f7db5"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.557258 4865 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.557316 4865 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-run\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.557335 4865 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a542e6-a9df-4f77-a583-36f51f0f7db5-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.557267 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a542e6-a9df-4f77-a583-36f51f0f7db5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c9a542e6-a9df-4f77-a583-36f51f0f7db5" (UID: "c9a542e6-a9df-4f77-a583-36f51f0f7db5"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.557572 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a542e6-a9df-4f77-a583-36f51f0f7db5-scripts" (OuterVolumeSpecName: "scripts") pod "c9a542e6-a9df-4f77-a583-36f51f0f7db5" (UID: "c9a542e6-a9df-4f77-a583-36f51f0f7db5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.565951 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a542e6-a9df-4f77-a583-36f51f0f7db5-kube-api-access-2j4jv" (OuterVolumeSpecName: "kube-api-access-2j4jv") pod "c9a542e6-a9df-4f77-a583-36f51f0f7db5" (UID: "c9a542e6-a9df-4f77-a583-36f51f0f7db5"). InnerVolumeSpecName "kube-api-access-2j4jv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.639783 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hz4vm-config-lqqht"] Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.651253 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hz4vm-config-lqqht"] Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.658591 4865 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9a542e6-a9df-4f77-a583-36f51f0f7db5-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.658641 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a542e6-a9df-4f77-a583-36f51f0f7db5-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:26 crc kubenswrapper[4865]: I0123 12:10:26.658655 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j4jv\" (UniqueName: \"kubernetes.io/projected/c9a542e6-a9df-4f77-a583-36f51f0f7db5-kube-api-access-2j4jv\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:27 crc kubenswrapper[4865]: I0123 12:10:27.064343 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93d6e9e5f32cf8a2e9d493130c61dff0b9bbbe3a36787f012cc6d1e3d42491e0" Jan 23 12:10:27 crc kubenswrapper[4865]: I0123 12:10:27.064376 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hz4vm-config-lqqht" Jan 23 12:10:28 crc kubenswrapper[4865]: I0123 12:10:28.127886 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9a542e6-a9df-4f77-a583-36f51f0f7db5" path="/var/lib/kubelet/pods/c9a542e6-a9df-4f77-a583-36f51f0f7db5/volumes" Jan 23 12:10:28 crc kubenswrapper[4865]: I0123 12:10:28.485893 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 23 12:10:28 crc kubenswrapper[4865]: I0123 12:10:28.814690 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="10a07490-f361-43e5-8d3e-a8bd917b3b84" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Jan 23 12:10:30 crc kubenswrapper[4865]: I0123 12:10:30.093210 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"b9dd8b3769f8e8f103876c6cc63fc9062237c88c4e5d9176b2412165c8524d56"} Jan 23 12:10:31 crc kubenswrapper[4865]: I0123 12:10:31.106818 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"4f50474a6450feb8b00cd0bd76e142d81d7c6374eabdf628f021ebfa0d50a359"} Jan 23 12:10:31 crc kubenswrapper[4865]: E0123 12:10:31.542541 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"account-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"account-replicator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.132:5001/podified-antelope-centos9/openstack-swift-account:c3923531bcda0b0811b2d5053f189beb\\\"\", failed to \"StartContainer\" for \"account-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-swift-account:c3923531bcda0b0811b2d5053f189beb\\\"\", failed to \"StartContainer\" for \"account-reaper\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-swift-account:c3923531bcda0b0811b2d5053f189beb\\\"\"]" pod="openstack/swift-storage-0" podUID="01f82f85-33db-4f45-97c6-84f6dd7689c8" Jan 23 12:10:32 crc kubenswrapper[4865]: I0123 12:10:32.116876 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"37bb6404da547a6f20866b9945563dce58932e50e9ad6db0eaa96934b2eed867"} Jan 23 12:10:35 crc kubenswrapper[4865]: I0123 12:10:35.146313 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"61091f4d06a136dae6cd48183283ae28dbb5786da7afb0ac6e938330448a1c93"} Jan 23 12:10:36 crc kubenswrapper[4865]: I0123 12:10:36.160239 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"65b9b9295750b3ee9a68ebd534ad501ab570f8d56758c559b8ebbba9f3648ea2"} Jan 23 12:10:37 crc kubenswrapper[4865]: I0123 12:10:37.172928 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"83f2dca47ec4b39102f80646f650700a87420533c398dc470bb122b52730d060"} Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.185332 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"01f82f85-33db-4f45-97c6-84f6dd7689c8","Type":"ContainerStarted","Data":"71055441522a8ba8e50e7b3df4199dfec50599d4cb70129c2c1f82d6a8ecf158"} Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.220537 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.636957257 podStartE2EDuration="54.220515053s" podCreationTimestamp="2026-01-23 12:09:44 +0000 UTC" firstStartedPulling="2026-01-23 12:10:02.24947011 +0000 UTC m=+1046.418542336" lastFinishedPulling="2026-01-23 12:10:33.833027906 +0000 UTC m=+1078.002100132" observedRunningTime="2026-01-23 12:10:38.215820108 +0000 UTC m=+1082.384892354" watchObservedRunningTime="2026-01-23 12:10:38.220515053 +0000 UTC m=+1082.389587299" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.484880 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.591412 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dbdb8dfbf-spkmz"] Jan 23 12:10:38 crc kubenswrapper[4865]: E0123 12:10:38.591754 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a542e6-a9df-4f77-a583-36f51f0f7db5" containerName="ovn-config" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.591770 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c9a542e6-a9df-4f77-a583-36f51f0f7db5" containerName="ovn-config" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.591937 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a542e6-a9df-4f77-a583-36f51f0f7db5" containerName="ovn-config" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.592718 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.596487 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.651545 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dbdb8dfbf-spkmz"] Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.697127 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-ovsdbserver-sb\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.697422 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-ovsdbserver-nb\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.697559 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-dns-svc\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.697748 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgxnb\" (UniqueName: \"kubernetes.io/projected/6a124ae5-cc93-44bb-9217-4d95a318568c-kube-api-access-jgxnb\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.697892 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-config\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.698085 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-dns-swift-storage-0\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.800096 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-ovsdbserver-sb\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: 
\"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.800391 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-ovsdbserver-nb\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.800501 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-dns-svc\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.800592 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgxnb\" (UniqueName: \"kubernetes.io/projected/6a124ae5-cc93-44bb-9217-4d95a318568c-kube-api-access-jgxnb\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.800726 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-config\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.800832 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-dns-swift-storage-0\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.801021 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-ovsdbserver-sb\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.801412 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-ovsdbserver-nb\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.801476 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-dns-svc\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.801545 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-dns-swift-storage-0\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 
12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.802113 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-config\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.811981 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="10a07490-f361-43e5-8d3e-a8bd917b3b84" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.824715 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgxnb\" (UniqueName: \"kubernetes.io/projected/6a124ae5-cc93-44bb-9217-4d95a318568c-kube-api-access-jgxnb\") pod \"dnsmasq-dns-5dbdb8dfbf-spkmz\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:38 crc kubenswrapper[4865]: I0123 12:10:38.919554 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:39 crc kubenswrapper[4865]: I0123 12:10:39.164473 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dbdb8dfbf-spkmz"] Jan 23 12:10:39 crc kubenswrapper[4865]: W0123 12:10:39.168375 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a124ae5_cc93_44bb_9217_4d95a318568c.slice/crio-443891b0385549c39baa0804fb5bc5fd6dbd1146ccfc2d5c5b709e7c93f86484 WatchSource:0}: Error finding container 443891b0385549c39baa0804fb5bc5fd6dbd1146ccfc2d5c5b709e7c93f86484: Status 404 returned error can't find the container with id 443891b0385549c39baa0804fb5bc5fd6dbd1146ccfc2d5c5b709e7c93f86484 Jan 23 12:10:39 crc kubenswrapper[4865]: I0123 12:10:39.199893 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" event={"ID":"6a124ae5-cc93-44bb-9217-4d95a318568c","Type":"ContainerStarted","Data":"443891b0385549c39baa0804fb5bc5fd6dbd1146ccfc2d5c5b709e7c93f86484"} Jan 23 12:10:40 crc kubenswrapper[4865]: I0123 12:10:40.207404 4865 generic.go:334] "Generic (PLEG): container finished" podID="6a124ae5-cc93-44bb-9217-4d95a318568c" containerID="d083448eb8c25934ef290803d1647e255106b6e9577509ff432487aacef86018" exitCode=0 Jan 23 12:10:40 crc kubenswrapper[4865]: I0123 12:10:40.207474 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" event={"ID":"6a124ae5-cc93-44bb-9217-4d95a318568c","Type":"ContainerDied","Data":"d083448eb8c25934ef290803d1647e255106b6e9577509ff432487aacef86018"} Jan 23 12:10:41 crc kubenswrapper[4865]: I0123 12:10:41.215929 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" event={"ID":"6a124ae5-cc93-44bb-9217-4d95a318568c","Type":"ContainerStarted","Data":"b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393"} Jan 23 12:10:41 crc kubenswrapper[4865]: I0123 12:10:41.216266 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:45 crc kubenswrapper[4865]: I0123 12:10:45.245202 4865 generic.go:334] "Generic (PLEG): container finished" podID="e2f6099c-c8bb-4dfd-83ab-8b1084df2aee" 
containerID="ad7026dd1880909fa3617e0bc706c84f44e7de2456c653ffc29c3bcdcd365689" exitCode=0 Jan 23 12:10:45 crc kubenswrapper[4865]: I0123 12:10:45.245351 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xcpzg" event={"ID":"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee","Type":"ContainerDied","Data":"ad7026dd1880909fa3617e0bc706c84f44e7de2456c653ffc29c3bcdcd365689"} Jan 23 12:10:45 crc kubenswrapper[4865]: I0123 12:10:45.275799 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" podStartSLOduration=7.275769381 podStartE2EDuration="7.275769381s" podCreationTimestamp="2026-01-23 12:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:10:41.242009272 +0000 UTC m=+1085.411081498" watchObservedRunningTime="2026-01-23 12:10:45.275769381 +0000 UTC m=+1089.444841607" Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.658409 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xcpzg" Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.854041 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq58p\" (UniqueName: \"kubernetes.io/projected/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-kube-api-access-xq58p\") pod \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.854440 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-combined-ca-bundle\") pod \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.854487 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-db-sync-config-data\") pod \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.854511 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-config-data\") pod \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\" (UID: \"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee\") " Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.859781 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-kube-api-access-xq58p" (OuterVolumeSpecName: "kube-api-access-xq58p") pod "e2f6099c-c8bb-4dfd-83ab-8b1084df2aee" (UID: "e2f6099c-c8bb-4dfd-83ab-8b1084df2aee"). InnerVolumeSpecName "kube-api-access-xq58p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.859856 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e2f6099c-c8bb-4dfd-83ab-8b1084df2aee" (UID: "e2f6099c-c8bb-4dfd-83ab-8b1084df2aee"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.880561 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2f6099c-c8bb-4dfd-83ab-8b1084df2aee" (UID: "e2f6099c-c8bb-4dfd-83ab-8b1084df2aee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.928384 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-config-data" (OuterVolumeSpecName: "config-data") pod "e2f6099c-c8bb-4dfd-83ab-8b1084df2aee" (UID: "e2f6099c-c8bb-4dfd-83ab-8b1084df2aee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.956759 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.956804 4865 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.956814 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:46 crc kubenswrapper[4865]: I0123 12:10:46.956822 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq58p\" (UniqueName: \"kubernetes.io/projected/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee-kube-api-access-xq58p\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.267550 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xcpzg" event={"ID":"e2f6099c-c8bb-4dfd-83ab-8b1084df2aee","Type":"ContainerDied","Data":"1bd8505dd5dcbd3645b7f0d7ad4a10360a8e4e5453e2104a24c5d8f728f1ef9d"} Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.267621 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd8505dd5dcbd3645b7f0d7ad4a10360a8e4e5453e2104a24c5d8f728f1ef9d" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.267685 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xcpzg" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.725813 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dbdb8dfbf-spkmz"] Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.726055 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" podUID="6a124ae5-cc93-44bb-9217-4d95a318568c" containerName="dnsmasq-dns" containerID="cri-o://b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393" gracePeriod=10 Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.727753 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.780770 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c996dc455-pwf2q"] Jan 23 12:10:47 crc kubenswrapper[4865]: E0123 12:10:47.781135 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f6099c-c8bb-4dfd-83ab-8b1084df2aee" containerName="glance-db-sync" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.781152 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f6099c-c8bb-4dfd-83ab-8b1084df2aee" containerName="glance-db-sync" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.781330 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f6099c-c8bb-4dfd-83ab-8b1084df2aee" containerName="glance-db-sync" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.782974 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.828084 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c996dc455-pwf2q"] Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.975716 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-config\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.976308 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-ovsdbserver-sb\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.976519 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-dns-svc\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.976716 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-dns-swift-storage-0\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.976865 4865 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-ovsdbserver-nb\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:47 crc kubenswrapper[4865]: I0123 12:10:47.976999 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2wb9\" (UniqueName: \"kubernetes.io/projected/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-kube-api-access-f2wb9\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.079375 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-dns-svc\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.079435 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-dns-swift-storage-0\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.079471 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-ovsdbserver-nb\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.079520 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2wb9\" (UniqueName: \"kubernetes.io/projected/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-kube-api-access-f2wb9\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.079560 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-config\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.079581 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-ovsdbserver-sb\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.080517 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-ovsdbserver-sb\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.081101 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-dns-svc\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.081779 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-ovsdbserver-nb\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.081792 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-dns-swift-storage-0\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.081978 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-config\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.114213 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2wb9\" (UniqueName: \"kubernetes.io/projected/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-kube-api-access-f2wb9\") pod \"dnsmasq-dns-c996dc455-pwf2q\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.256495 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.281324 4865 generic.go:334] "Generic (PLEG): container finished" podID="6a124ae5-cc93-44bb-9217-4d95a318568c" containerID="b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393" exitCode=0 Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.281391 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.281418 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" event={"ID":"6a124ae5-cc93-44bb-9217-4d95a318568c","Type":"ContainerDied","Data":"b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393"} Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.283458 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dbdb8dfbf-spkmz" event={"ID":"6a124ae5-cc93-44bb-9217-4d95a318568c","Type":"ContainerDied","Data":"443891b0385549c39baa0804fb5bc5fd6dbd1146ccfc2d5c5b709e7c93f86484"} Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.283560 4865 scope.go:117] "RemoveContainer" containerID="b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.344438 4865 scope.go:117] "RemoveContainer" containerID="d083448eb8c25934ef290803d1647e255106b6e9577509ff432487aacef86018" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.376835 4865 scope.go:117] "RemoveContainer" containerID="b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393" Jan 23 12:10:48 crc kubenswrapper[4865]: E0123 12:10:48.380814 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393\": container with ID starting with b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393 not found: ID does not exist" containerID="b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.380856 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393"} err="failed to get container status \"b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393\": rpc error: code = NotFound desc = could not find container \"b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393\": container with ID starting with b32ecf96af2062356f63e7efe8d3cc76a767f992ec4e2c84d7984a2209ef8393 not found: ID does not exist" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.380881 4865 scope.go:117] "RemoveContainer" containerID="d083448eb8c25934ef290803d1647e255106b6e9577509ff432487aacef86018" Jan 23 12:10:48 crc kubenswrapper[4865]: E0123 12:10:48.381347 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d083448eb8c25934ef290803d1647e255106b6e9577509ff432487aacef86018\": container with ID starting with d083448eb8c25934ef290803d1647e255106b6e9577509ff432487aacef86018 not found: ID does not exist" containerID="d083448eb8c25934ef290803d1647e255106b6e9577509ff432487aacef86018" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.381389 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d083448eb8c25934ef290803d1647e255106b6e9577509ff432487aacef86018"} err="failed to get container status \"d083448eb8c25934ef290803d1647e255106b6e9577509ff432487aacef86018\": rpc error: code = NotFound desc = could not find container \"d083448eb8c25934ef290803d1647e255106b6e9577509ff432487aacef86018\": container with ID starting with d083448eb8c25934ef290803d1647e255106b6e9577509ff432487aacef86018 not found: ID does not exist" Jan 23 12:10:48 crc 
kubenswrapper[4865]: I0123 12:10:48.384308 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-ovsdbserver-nb\") pod \"6a124ae5-cc93-44bb-9217-4d95a318568c\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.384553 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-dns-svc\") pod \"6a124ae5-cc93-44bb-9217-4d95a318568c\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.384644 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-dns-swift-storage-0\") pod \"6a124ae5-cc93-44bb-9217-4d95a318568c\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.384687 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgxnb\" (UniqueName: \"kubernetes.io/projected/6a124ae5-cc93-44bb-9217-4d95a318568c-kube-api-access-jgxnb\") pod \"6a124ae5-cc93-44bb-9217-4d95a318568c\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.384745 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-ovsdbserver-sb\") pod \"6a124ae5-cc93-44bb-9217-4d95a318568c\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.384777 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-config\") pod \"6a124ae5-cc93-44bb-9217-4d95a318568c\" (UID: \"6a124ae5-cc93-44bb-9217-4d95a318568c\") " Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.391960 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a124ae5-cc93-44bb-9217-4d95a318568c-kube-api-access-jgxnb" (OuterVolumeSpecName: "kube-api-access-jgxnb") pod "6a124ae5-cc93-44bb-9217-4d95a318568c" (UID: "6a124ae5-cc93-44bb-9217-4d95a318568c"). InnerVolumeSpecName "kube-api-access-jgxnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.406005 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.431998 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6a124ae5-cc93-44bb-9217-4d95a318568c" (UID: "6a124ae5-cc93-44bb-9217-4d95a318568c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.432128 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6a124ae5-cc93-44bb-9217-4d95a318568c" (UID: "6a124ae5-cc93-44bb-9217-4d95a318568c"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.447593 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6a124ae5-cc93-44bb-9217-4d95a318568c" (UID: "6a124ae5-cc93-44bb-9217-4d95a318568c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.448455 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a124ae5-cc93-44bb-9217-4d95a318568c" (UID: "6a124ae5-cc93-44bb-9217-4d95a318568c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.453882 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-config" (OuterVolumeSpecName: "config") pod "6a124ae5-cc93-44bb-9217-4d95a318568c" (UID: "6a124ae5-cc93-44bb-9217-4d95a318568c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.485808 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.487759 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.487798 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.487810 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.487826 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgxnb\" (UniqueName: \"kubernetes.io/projected/6a124ae5-cc93-44bb-9217-4d95a318568c-kube-api-access-jgxnb\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.487837 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.487851 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a124ae5-cc93-44bb-9217-4d95a318568c-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.661287 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dbdb8dfbf-spkmz"] Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.676284 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dbdb8dfbf-spkmz"] Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.815812 4865 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 12:10:48 crc kubenswrapper[4865]: I0123 12:10:48.962790 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c996dc455-pwf2q"] Jan 23 12:10:49 crc kubenswrapper[4865]: I0123 12:10:49.291242 4865 generic.go:334] "Generic (PLEG): container finished" podID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerID="afbb4a64e4bb4f0d845051e9923efdaec8d582d3ba6463f4b1125771b319df76" exitCode=0 Jan 23 12:10:49 crc kubenswrapper[4865]: I0123 12:10:49.291386 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" event={"ID":"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5","Type":"ContainerDied","Data":"afbb4a64e4bb4f0d845051e9923efdaec8d582d3ba6463f4b1125771b319df76"} Jan 23 12:10:49 crc kubenswrapper[4865]: I0123 12:10:49.291676 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" event={"ID":"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5","Type":"ContainerStarted","Data":"3ed433f5a667064b682d158867e8d2203268ab14e685f0398f86b589d01ff3de"} Jan 23 12:10:50 crc kubenswrapper[4865]: I0123 12:10:50.128952 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a124ae5-cc93-44bb-9217-4d95a318568c" path="/var/lib/kubelet/pods/6a124ae5-cc93-44bb-9217-4d95a318568c/volumes" Jan 23 12:10:50 crc kubenswrapper[4865]: I0123 12:10:50.305531 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" event={"ID":"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5","Type":"ContainerStarted","Data":"910dfc6635f1736615be07a4a6897013d42c0c2b0437524c3a4d62a4b4884e17"} Jan 23 12:10:50 crc kubenswrapper[4865]: I0123 12:10:50.305891 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.034673 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" podStartSLOduration=4.03465505 podStartE2EDuration="4.03465505s" podCreationTimestamp="2026-01-23 12:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:10:50.333334196 +0000 UTC m=+1094.502406432" watchObservedRunningTime="2026-01-23 12:10:51.03465505 +0000 UTC m=+1095.203727276" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.039459 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-ae31-account-create-update-m7fmp"] Jan 23 12:10:51 crc kubenswrapper[4865]: E0123 12:10:51.039928 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a124ae5-cc93-44bb-9217-4d95a318568c" containerName="dnsmasq-dns" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.039947 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a124ae5-cc93-44bb-9217-4d95a318568c" containerName="dnsmasq-dns" Jan 23 12:10:51 crc kubenswrapper[4865]: E0123 12:10:51.039964 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a124ae5-cc93-44bb-9217-4d95a318568c" containerName="init" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.039972 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a124ae5-cc93-44bb-9217-4d95a318568c" containerName="init" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.040130 4865 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6a124ae5-cc93-44bb-9217-4d95a318568c" containerName="dnsmasq-dns" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.040833 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-ae31-account-create-update-m7fmp" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.043211 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.059757 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-mhlkc"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.061985 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mhlkc" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.075987 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-mhlkc"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.110988 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-ae31-account-create-update-m7fmp"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.157731 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a5ba781-e3a4-4458-918c-816f636b14bf-operator-scripts\") pod \"heat-db-create-mhlkc\" (UID: \"0a5ba781-e3a4-4458-918c-816f636b14bf\") " pod="openstack/heat-db-create-mhlkc" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.157846 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af312aba-bce9-4e7c-a761-8ab57e0bb3e3-operator-scripts\") pod \"heat-ae31-account-create-update-m7fmp\" (UID: \"af312aba-bce9-4e7c-a761-8ab57e0bb3e3\") " pod="openstack/heat-ae31-account-create-update-m7fmp" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.157914 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnb8z\" (UniqueName: \"kubernetes.io/projected/0a5ba781-e3a4-4458-918c-816f636b14bf-kube-api-access-jnb8z\") pod \"heat-db-create-mhlkc\" (UID: \"0a5ba781-e3a4-4458-918c-816f636b14bf\") " pod="openstack/heat-db-create-mhlkc" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.157948 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxtwp\" (UniqueName: \"kubernetes.io/projected/af312aba-bce9-4e7c-a761-8ab57e0bb3e3-kube-api-access-hxtwp\") pod \"heat-ae31-account-create-update-m7fmp\" (UID: \"af312aba-bce9-4e7c-a761-8ab57e0bb3e3\") " pod="openstack/heat-ae31-account-create-update-m7fmp" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.212018 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-bg7bm"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.213235 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-bg7bm" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.230100 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bg7bm"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.260183 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnb8z\" (UniqueName: \"kubernetes.io/projected/0a5ba781-e3a4-4458-918c-816f636b14bf-kube-api-access-jnb8z\") pod \"heat-db-create-mhlkc\" (UID: \"0a5ba781-e3a4-4458-918c-816f636b14bf\") " pod="openstack/heat-db-create-mhlkc" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.260241 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxtwp\" (UniqueName: \"kubernetes.io/projected/af312aba-bce9-4e7c-a761-8ab57e0bb3e3-kube-api-access-hxtwp\") pod \"heat-ae31-account-create-update-m7fmp\" (UID: \"af312aba-bce9-4e7c-a761-8ab57e0bb3e3\") " pod="openstack/heat-ae31-account-create-update-m7fmp" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.260307 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a5ba781-e3a4-4458-918c-816f636b14bf-operator-scripts\") pod \"heat-db-create-mhlkc\" (UID: \"0a5ba781-e3a4-4458-918c-816f636b14bf\") " pod="openstack/heat-db-create-mhlkc" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.260358 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af312aba-bce9-4e7c-a761-8ab57e0bb3e3-operator-scripts\") pod \"heat-ae31-account-create-update-m7fmp\" (UID: \"af312aba-bce9-4e7c-a761-8ab57e0bb3e3\") " pod="openstack/heat-ae31-account-create-update-m7fmp" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.260400 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86e93d72-830a-4415-b23a-91c49115233f-operator-scripts\") pod \"cinder-db-create-bg7bm\" (UID: \"86e93d72-830a-4415-b23a-91c49115233f\") " pod="openstack/cinder-db-create-bg7bm" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.260421 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5q2m\" (UniqueName: \"kubernetes.io/projected/86e93d72-830a-4415-b23a-91c49115233f-kube-api-access-f5q2m\") pod \"cinder-db-create-bg7bm\" (UID: \"86e93d72-830a-4415-b23a-91c49115233f\") " pod="openstack/cinder-db-create-bg7bm" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.261687 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a5ba781-e3a4-4458-918c-816f636b14bf-operator-scripts\") pod \"heat-db-create-mhlkc\" (UID: \"0a5ba781-e3a4-4458-918c-816f636b14bf\") " pod="openstack/heat-db-create-mhlkc" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.262365 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af312aba-bce9-4e7c-a761-8ab57e0bb3e3-operator-scripts\") pod \"heat-ae31-account-create-update-m7fmp\" (UID: \"af312aba-bce9-4e7c-a761-8ab57e0bb3e3\") " pod="openstack/heat-ae31-account-create-update-m7fmp" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.289161 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-hxtwp\" (UniqueName: \"kubernetes.io/projected/af312aba-bce9-4e7c-a761-8ab57e0bb3e3-kube-api-access-hxtwp\") pod \"heat-ae31-account-create-update-m7fmp\" (UID: \"af312aba-bce9-4e7c-a761-8ab57e0bb3e3\") " pod="openstack/heat-ae31-account-create-update-m7fmp" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.307061 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnb8z\" (UniqueName: \"kubernetes.io/projected/0a5ba781-e3a4-4458-918c-816f636b14bf-kube-api-access-jnb8z\") pod \"heat-db-create-mhlkc\" (UID: \"0a5ba781-e3a4-4458-918c-816f636b14bf\") " pod="openstack/heat-db-create-mhlkc" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.362479 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-ae31-account-create-update-m7fmp" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.363426 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86e93d72-830a-4415-b23a-91c49115233f-operator-scripts\") pod \"cinder-db-create-bg7bm\" (UID: \"86e93d72-830a-4415-b23a-91c49115233f\") " pod="openstack/cinder-db-create-bg7bm" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.363502 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5q2m\" (UniqueName: \"kubernetes.io/projected/86e93d72-830a-4415-b23a-91c49115233f-kube-api-access-f5q2m\") pod \"cinder-db-create-bg7bm\" (UID: \"86e93d72-830a-4415-b23a-91c49115233f\") " pod="openstack/cinder-db-create-bg7bm" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.364988 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86e93d72-830a-4415-b23a-91c49115233f-operator-scripts\") pod \"cinder-db-create-bg7bm\" (UID: \"86e93d72-830a-4415-b23a-91c49115233f\") " pod="openstack/cinder-db-create-bg7bm" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.367363 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2079-account-create-update-xtn8h"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.369138 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2079-account-create-update-xtn8h" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.374482 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.390990 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-mhlkc" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.394538 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2079-account-create-update-xtn8h"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.405358 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5q2m\" (UniqueName: \"kubernetes.io/projected/86e93d72-830a-4415-b23a-91c49115233f-kube-api-access-f5q2m\") pod \"cinder-db-create-bg7bm\" (UID: \"86e93d72-830a-4415-b23a-91c49115233f\") " pod="openstack/cinder-db-create-bg7bm" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.464581 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4-operator-scripts\") pod \"cinder-2079-account-create-update-xtn8h\" (UID: \"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4\") " pod="openstack/cinder-2079-account-create-update-xtn8h" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.464720 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wg2z\" (UniqueName: \"kubernetes.io/projected/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4-kube-api-access-4wg2z\") pod \"cinder-2079-account-create-update-xtn8h\" (UID: \"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4\") " pod="openstack/cinder-2079-account-create-update-xtn8h" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.532694 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bg7bm" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.577759 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wg2z\" (UniqueName: \"kubernetes.io/projected/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4-kube-api-access-4wg2z\") pod \"cinder-2079-account-create-update-xtn8h\" (UID: \"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4\") " pod="openstack/cinder-2079-account-create-update-xtn8h" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.578071 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4-operator-scripts\") pod \"cinder-2079-account-create-update-xtn8h\" (UID: \"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4\") " pod="openstack/cinder-2079-account-create-update-xtn8h" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.583985 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4-operator-scripts\") pod \"cinder-2079-account-create-update-xtn8h\" (UID: \"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4\") " pod="openstack/cinder-2079-account-create-update-xtn8h" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.595733 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-glvsw"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.603819 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-glvsw" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.628924 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wg2z\" (UniqueName: \"kubernetes.io/projected/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4-kube-api-access-4wg2z\") pod \"cinder-2079-account-create-update-xtn8h\" (UID: \"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4\") " pod="openstack/cinder-2079-account-create-update-xtn8h" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.648362 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-27ee-account-create-update-twq6z"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.653727 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-27ee-account-create-update-twq6z" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.658219 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.667666 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-glvsw"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.692084 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-g94f9"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.693404 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-g94f9" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.710143 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.710503 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nlns" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.710716 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.710883 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.711505 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-27ee-account-create-update-twq6z"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.729021 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-g94f9"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.756053 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-l2bdc"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.757355 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-l2bdc" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.764199 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-l2bdc"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.787265 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcfn5\" (UniqueName: \"kubernetes.io/projected/47bf79e7-dbca-4568-877d-82d13222755e-kube-api-access-fcfn5\") pod \"barbican-27ee-account-create-update-twq6z\" (UID: \"47bf79e7-dbca-4568-877d-82d13222755e\") " pod="openstack/barbican-27ee-account-create-update-twq6z" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.787324 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gbcn\" (UniqueName: \"kubernetes.io/projected/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-kube-api-access-9gbcn\") pod \"keystone-db-sync-g94f9\" (UID: \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\") " pod="openstack/keystone-db-sync-g94f9" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.787643 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-combined-ca-bundle\") pod \"keystone-db-sync-g94f9\" (UID: \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\") " pod="openstack/keystone-db-sync-g94f9" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.787887 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgwqx\" (UniqueName: \"kubernetes.io/projected/5e83a6fe-4aab-44f6-a5c3-1d2afd376278-kube-api-access-hgwqx\") pod \"barbican-db-create-glvsw\" (UID: \"5e83a6fe-4aab-44f6-a5c3-1d2afd376278\") " pod="openstack/barbican-db-create-glvsw" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.788053 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e83a6fe-4aab-44f6-a5c3-1d2afd376278-operator-scripts\") pod \"barbican-db-create-glvsw\" (UID: \"5e83a6fe-4aab-44f6-a5c3-1d2afd376278\") " pod="openstack/barbican-db-create-glvsw" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.788200 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-config-data\") pod \"keystone-db-sync-g94f9\" (UID: \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\") " pod="openstack/keystone-db-sync-g94f9" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.788263 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47bf79e7-dbca-4568-877d-82d13222755e-operator-scripts\") pod \"barbican-27ee-account-create-update-twq6z\" (UID: \"47bf79e7-dbca-4568-877d-82d13222755e\") " pod="openstack/barbican-27ee-account-create-update-twq6z" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.871495 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2079-account-create-update-xtn8h" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.889861 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e83a6fe-4aab-44f6-a5c3-1d2afd376278-operator-scripts\") pod \"barbican-db-create-glvsw\" (UID: \"5e83a6fe-4aab-44f6-a5c3-1d2afd376278\") " pod="openstack/barbican-db-create-glvsw" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.889928 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-config-data\") pod \"keystone-db-sync-g94f9\" (UID: \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\") " pod="openstack/keystone-db-sync-g94f9" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.889960 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47bf79e7-dbca-4568-877d-82d13222755e-operator-scripts\") pod \"barbican-27ee-account-create-update-twq6z\" (UID: \"47bf79e7-dbca-4568-877d-82d13222755e\") " pod="openstack/barbican-27ee-account-create-update-twq6z" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.889986 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcfn5\" (UniqueName: \"kubernetes.io/projected/47bf79e7-dbca-4568-877d-82d13222755e-kube-api-access-fcfn5\") pod \"barbican-27ee-account-create-update-twq6z\" (UID: \"47bf79e7-dbca-4568-877d-82d13222755e\") " pod="openstack/barbican-27ee-account-create-update-twq6z" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.890015 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gbcn\" (UniqueName: \"kubernetes.io/projected/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-kube-api-access-9gbcn\") pod \"keystone-db-sync-g94f9\" (UID: \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\") " pod="openstack/keystone-db-sync-g94f9" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.890068 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de0053c-5a7e-4b59-a93a-90a9073cfa30-operator-scripts\") pod \"neutron-db-create-l2bdc\" (UID: \"5de0053c-5a7e-4b59-a93a-90a9073cfa30\") " pod="openstack/neutron-db-create-l2bdc" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.890088 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-combined-ca-bundle\") pod \"keystone-db-sync-g94f9\" (UID: \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\") " pod="openstack/keystone-db-sync-g94f9" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.890125 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcs9r\" (UniqueName: \"kubernetes.io/projected/5de0053c-5a7e-4b59-a93a-90a9073cfa30-kube-api-access-kcs9r\") pod \"neutron-db-create-l2bdc\" (UID: \"5de0053c-5a7e-4b59-a93a-90a9073cfa30\") " pod="openstack/neutron-db-create-l2bdc" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.890150 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgwqx\" (UniqueName: \"kubernetes.io/projected/5e83a6fe-4aab-44f6-a5c3-1d2afd376278-kube-api-access-hgwqx\") pod 
\"barbican-db-create-glvsw\" (UID: \"5e83a6fe-4aab-44f6-a5c3-1d2afd376278\") " pod="openstack/barbican-db-create-glvsw" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.895309 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47bf79e7-dbca-4568-877d-82d13222755e-operator-scripts\") pod \"barbican-27ee-account-create-update-twq6z\" (UID: \"47bf79e7-dbca-4568-877d-82d13222755e\") " pod="openstack/barbican-27ee-account-create-update-twq6z" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.896400 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-config-data\") pod \"keystone-db-sync-g94f9\" (UID: \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\") " pod="openstack/keystone-db-sync-g94f9" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.896937 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e83a6fe-4aab-44f6-a5c3-1d2afd376278-operator-scripts\") pod \"barbican-db-create-glvsw\" (UID: \"5e83a6fe-4aab-44f6-a5c3-1d2afd376278\") " pod="openstack/barbican-db-create-glvsw" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.907041 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-combined-ca-bundle\") pod \"keystone-db-sync-g94f9\" (UID: \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\") " pod="openstack/keystone-db-sync-g94f9" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.914532 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4c64-account-create-update-q7s48"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.916100 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4c64-account-create-update-q7s48" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.921100 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcfn5\" (UniqueName: \"kubernetes.io/projected/47bf79e7-dbca-4568-877d-82d13222755e-kube-api-access-fcfn5\") pod \"barbican-27ee-account-create-update-twq6z\" (UID: \"47bf79e7-dbca-4568-877d-82d13222755e\") " pod="openstack/barbican-27ee-account-create-update-twq6z" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.929465 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgwqx\" (UniqueName: \"kubernetes.io/projected/5e83a6fe-4aab-44f6-a5c3-1d2afd376278-kube-api-access-hgwqx\") pod \"barbican-db-create-glvsw\" (UID: \"5e83a6fe-4aab-44f6-a5c3-1d2afd376278\") " pod="openstack/barbican-db-create-glvsw" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.935442 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.943709 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4c64-account-create-update-q7s48"] Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.945321 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gbcn\" (UniqueName: \"kubernetes.io/projected/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-kube-api-access-9gbcn\") pod \"keystone-db-sync-g94f9\" (UID: \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\") " pod="openstack/keystone-db-sync-g94f9" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.962533 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-glvsw" Jan 23 12:10:51 crc kubenswrapper[4865]: I0123 12:10:51.995042 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcs9r\" (UniqueName: \"kubernetes.io/projected/5de0053c-5a7e-4b59-a93a-90a9073cfa30-kube-api-access-kcs9r\") pod \"neutron-db-create-l2bdc\" (UID: \"5de0053c-5a7e-4b59-a93a-90a9073cfa30\") " pod="openstack/neutron-db-create-l2bdc" Jan 23 12:10:52 crc kubenswrapper[4865]: I0123 12:10:52.006203 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-27ee-account-create-update-twq6z" Jan 23 12:10:52 crc kubenswrapper[4865]: I0123 12:10:52.013053 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de0053c-5a7e-4b59-a93a-90a9073cfa30-operator-scripts\") pod \"neutron-db-create-l2bdc\" (UID: \"5de0053c-5a7e-4b59-a93a-90a9073cfa30\") " pod="openstack/neutron-db-create-l2bdc" Jan 23 12:10:52 crc kubenswrapper[4865]: I0123 12:10:52.013971 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de0053c-5a7e-4b59-a93a-90a9073cfa30-operator-scripts\") pod \"neutron-db-create-l2bdc\" (UID: \"5de0053c-5a7e-4b59-a93a-90a9073cfa30\") " pod="openstack/neutron-db-create-l2bdc" Jan 23 12:10:52 crc kubenswrapper[4865]: I0123 12:10:52.030651 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-g94f9" Jan 23 12:10:52 crc kubenswrapper[4865]: I0123 12:10:52.036514 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcs9r\" (UniqueName: \"kubernetes.io/projected/5de0053c-5a7e-4b59-a93a-90a9073cfa30-kube-api-access-kcs9r\") pod \"neutron-db-create-l2bdc\" (UID: \"5de0053c-5a7e-4b59-a93a-90a9073cfa30\") " pod="openstack/neutron-db-create-l2bdc" Jan 23 12:10:52 crc kubenswrapper[4865]: I0123 12:10:52.085013 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-l2bdc" Jan 23 12:10:52 crc kubenswrapper[4865]: I0123 12:10:52.114682 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fbdceb-1c27-4507-8317-fa5b8e427716-operator-scripts\") pod \"neutron-4c64-account-create-update-q7s48\" (UID: \"81fbdceb-1c27-4507-8317-fa5b8e427716\") " pod="openstack/neutron-4c64-account-create-update-q7s48" Jan 23 12:10:52 crc kubenswrapper[4865]: I0123 12:10:52.114739 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxjfk\" (UniqueName: \"kubernetes.io/projected/81fbdceb-1c27-4507-8317-fa5b8e427716-kube-api-access-cxjfk\") pod \"neutron-4c64-account-create-update-q7s48\" (UID: \"81fbdceb-1c27-4507-8317-fa5b8e427716\") " pod="openstack/neutron-4c64-account-create-update-q7s48" Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:52.225054 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fbdceb-1c27-4507-8317-fa5b8e427716-operator-scripts\") pod \"neutron-4c64-account-create-update-q7s48\" (UID: \"81fbdceb-1c27-4507-8317-fa5b8e427716\") " pod="openstack/neutron-4c64-account-create-update-q7s48" Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:52.225104 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxjfk\" (UniqueName: \"kubernetes.io/projected/81fbdceb-1c27-4507-8317-fa5b8e427716-kube-api-access-cxjfk\") pod \"neutron-4c64-account-create-update-q7s48\" (UID: \"81fbdceb-1c27-4507-8317-fa5b8e427716\") " pod="openstack/neutron-4c64-account-create-update-q7s48" Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:52.229359 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fbdceb-1c27-4507-8317-fa5b8e427716-operator-scripts\") pod \"neutron-4c64-account-create-update-q7s48\" (UID: \"81fbdceb-1c27-4507-8317-fa5b8e427716\") " pod="openstack/neutron-4c64-account-create-update-q7s48" Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:52.254276 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxjfk\" (UniqueName: \"kubernetes.io/projected/81fbdceb-1c27-4507-8317-fa5b8e427716-kube-api-access-cxjfk\") pod \"neutron-4c64-account-create-update-q7s48\" (UID: \"81fbdceb-1c27-4507-8317-fa5b8e427716\") " pod="openstack/neutron-4c64-account-create-update-q7s48" Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:52.546405 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4c64-account-create-update-q7s48" Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:52.575921 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-mhlkc"] Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:53.267053 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-ae31-account-create-update-m7fmp"] Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:53.383812 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mhlkc" event={"ID":"0a5ba781-e3a4-4458-918c-816f636b14bf","Type":"ContainerStarted","Data":"bef4710816c7f1e139a5517bef0e19f0a983e6265f22dd954056de25dfa1ecd0"} Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:53.383947 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mhlkc" event={"ID":"0a5ba781-e3a4-4458-918c-816f636b14bf","Type":"ContainerStarted","Data":"b2cd9a067d9e42f7cf20110dd73ebd20d29043c38d4b430f491b7b9d88c66a68"} Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:53.386945 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-ae31-account-create-update-m7fmp" event={"ID":"af312aba-bce9-4e7c-a761-8ab57e0bb3e3","Type":"ContainerStarted","Data":"6f506e393c5ccecb827ae0f87c13d704ff5ae5850b1f37a4f0983cef18739c69"} Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:53.450680 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-mhlkc" podStartSLOduration=2.450642475 podStartE2EDuration="2.450642475s" podCreationTimestamp="2026-01-23 12:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:10:53.401888899 +0000 UTC m=+1097.570961125" watchObservedRunningTime="2026-01-23 12:10:53.450642475 +0000 UTC m=+1097.619714701" Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:53.462321 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-bg7bm"] Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:53.821302 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-g94f9"] Jan 23 12:10:53 crc kubenswrapper[4865]: W0123 12:10:53.835126 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9dbd2af_b7e3_40cb_9b53_527c5a03a3b6.slice/crio-946be756f589eb4c05d2d2fdee1668a2e11f6f5d3cdfbdef4c6c40caf8bc010a WatchSource:0}: Error finding container 946be756f589eb4c05d2d2fdee1668a2e11f6f5d3cdfbdef4c6c40caf8bc010a: Status 404 returned error can't find the container with id 946be756f589eb4c05d2d2fdee1668a2e11f6f5d3cdfbdef4c6c40caf8bc010a Jan 23 12:10:53 crc kubenswrapper[4865]: W0123 12:10:53.850795 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e83a6fe_4aab_44f6_a5c3_1d2afd376278.slice/crio-24b893ef969143f494324ecdff20379ddf6da026afe93d1d2bd16f839a0e8b40 WatchSource:0}: Error finding container 24b893ef969143f494324ecdff20379ddf6da026afe93d1d2bd16f839a0e8b40: Status 404 returned error can't find the container with id 24b893ef969143f494324ecdff20379ddf6da026afe93d1d2bd16f839a0e8b40 Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:53.862744 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2079-account-create-update-xtn8h"] Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:53.879318 4865 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-glvsw"] Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:53.895022 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-27ee-account-create-update-twq6z"] Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:53.906490 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-l2bdc"] Jan 23 12:10:53 crc kubenswrapper[4865]: I0123 12:10:53.913741 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4c64-account-create-update-q7s48"] Jan 23 12:10:53 crc kubenswrapper[4865]: W0123 12:10:53.948780 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5de0053c_5a7e_4b59_a93a_90a9073cfa30.slice/crio-55729e550fd00df13207763f9fe480af536d2aa0227fa53db2c11cb113e68220 WatchSource:0}: Error finding container 55729e550fd00df13207763f9fe480af536d2aa0227fa53db2c11cb113e68220: Status 404 returned error can't find the container with id 55729e550fd00df13207763f9fe480af536d2aa0227fa53db2c11cb113e68220 Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.407252 4865 generic.go:334] "Generic (PLEG): container finished" podID="0a5ba781-e3a4-4458-918c-816f636b14bf" containerID="bef4710816c7f1e139a5517bef0e19f0a983e6265f22dd954056de25dfa1ecd0" exitCode=0 Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.409223 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mhlkc" event={"ID":"0a5ba781-e3a4-4458-918c-816f636b14bf","Type":"ContainerDied","Data":"bef4710816c7f1e139a5517bef0e19f0a983e6265f22dd954056de25dfa1ecd0"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.426190 4865 generic.go:334] "Generic (PLEG): container finished" podID="e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4" containerID="b743afdd02cb21c9ec48954c190e17965245585a53e682fd2f335376810b8959" exitCode=0 Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.426275 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2079-account-create-update-xtn8h" event={"ID":"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4","Type":"ContainerDied","Data":"b743afdd02cb21c9ec48954c190e17965245585a53e682fd2f335376810b8959"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.426301 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2079-account-create-update-xtn8h" event={"ID":"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4","Type":"ContainerStarted","Data":"fde9db4f44c3f6ed5f7945a9140fdd6075f3041e8bc85b082020fc19e219f0da"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.428056 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-l2bdc" event={"ID":"5de0053c-5a7e-4b59-a93a-90a9073cfa30","Type":"ContainerStarted","Data":"11fdab9ab1a6ffa11537d1ac2f703a49501f8ae83d85953bb7bf4497bfd3a75b"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.428081 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-l2bdc" event={"ID":"5de0053c-5a7e-4b59-a93a-90a9073cfa30","Type":"ContainerStarted","Data":"55729e550fd00df13207763f9fe480af536d2aa0227fa53db2c11cb113e68220"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.430962 4865 generic.go:334] "Generic (PLEG): container finished" podID="af312aba-bce9-4e7c-a761-8ab57e0bb3e3" containerID="6bf4c5d697c8214e2ebbcfb60e57e22362f8fac5b514a722ae2d275a0d8bf32e" exitCode=0 Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.431012 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-ae31-account-create-update-m7fmp" event={"ID":"af312aba-bce9-4e7c-a761-8ab57e0bb3e3","Type":"ContainerDied","Data":"6bf4c5d697c8214e2ebbcfb60e57e22362f8fac5b514a722ae2d275a0d8bf32e"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.433456 4865 generic.go:334] "Generic (PLEG): container finished" podID="86e93d72-830a-4415-b23a-91c49115233f" containerID="584221033cd94411f9570ed0a13175f185942a3b4abad2102c446ebd502aa4a2" exitCode=0 Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.433497 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bg7bm" event={"ID":"86e93d72-830a-4415-b23a-91c49115233f","Type":"ContainerDied","Data":"584221033cd94411f9570ed0a13175f185942a3b4abad2102c446ebd502aa4a2"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.433513 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bg7bm" event={"ID":"86e93d72-830a-4415-b23a-91c49115233f","Type":"ContainerStarted","Data":"fdcd24b076483b7bed553aed30242377dbc8518c1620b967092dfeb984d9e365"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.435199 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4c64-account-create-update-q7s48" event={"ID":"81fbdceb-1c27-4507-8317-fa5b8e427716","Type":"ContainerStarted","Data":"d7078b3515259ead102e0e234fafac976acfe2d86ef701cc0d754d20c6af2acf"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.435219 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4c64-account-create-update-q7s48" event={"ID":"81fbdceb-1c27-4507-8317-fa5b8e427716","Type":"ContainerStarted","Data":"1bf915da2895eb914ce7c75a0d6418eeeb7bec68c6891d531acfdb9082ce0b28"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.445167 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-27ee-account-create-update-twq6z" event={"ID":"47bf79e7-dbca-4568-877d-82d13222755e","Type":"ContainerStarted","Data":"e3d3312765c37a25a5186fca3d2a2b049e7bc843cd0e02a2e10a5d1a07eb5fcb"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.445208 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-27ee-account-create-update-twq6z" event={"ID":"47bf79e7-dbca-4568-877d-82d13222755e","Type":"ContainerStarted","Data":"e05c89f306a4f86dba801af0e9d35b9ac4e2f2196740530eb93134ebf16db978"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.447392 4865 generic.go:334] "Generic (PLEG): container finished" podID="5e83a6fe-4aab-44f6-a5c3-1d2afd376278" containerID="dcfe04daeaa1f237711277bc1b564d1c87ff1f304f7c3d5a700227d7806e64c7" exitCode=0 Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.447435 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-glvsw" event={"ID":"5e83a6fe-4aab-44f6-a5c3-1d2afd376278","Type":"ContainerDied","Data":"dcfe04daeaa1f237711277bc1b564d1c87ff1f304f7c3d5a700227d7806e64c7"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.447455 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-glvsw" event={"ID":"5e83a6fe-4aab-44f6-a5c3-1d2afd376278","Type":"ContainerStarted","Data":"24b893ef969143f494324ecdff20379ddf6da026afe93d1d2bd16f839a0e8b40"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.449557 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-g94f9" 
event={"ID":"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6","Type":"ContainerStarted","Data":"946be756f589eb4c05d2d2fdee1668a2e11f6f5d3cdfbdef4c6c40caf8bc010a"} Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.462865 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-4c64-account-create-update-q7s48" podStartSLOduration=3.462846445 podStartE2EDuration="3.462846445s" podCreationTimestamp="2026-01-23 12:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:10:54.457156934 +0000 UTC m=+1098.626229160" watchObservedRunningTime="2026-01-23 12:10:54.462846445 +0000 UTC m=+1098.631918671" Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.522899 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-l2bdc" podStartSLOduration=3.522881366 podStartE2EDuration="3.522881366s" podCreationTimestamp="2026-01-23 12:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:10:54.520132319 +0000 UTC m=+1098.689204555" watchObservedRunningTime="2026-01-23 12:10:54.522881366 +0000 UTC m=+1098.691953592" Jan 23 12:10:54 crc kubenswrapper[4865]: I0123 12:10:54.545483 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-27ee-account-create-update-twq6z" podStartSLOduration=3.545461111 podStartE2EDuration="3.545461111s" podCreationTimestamp="2026-01-23 12:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:10:54.533244261 +0000 UTC m=+1098.702316487" watchObservedRunningTime="2026-01-23 12:10:54.545461111 +0000 UTC m=+1098.714533337" Jan 23 12:10:55 crc kubenswrapper[4865]: I0123 12:10:55.471271 4865 generic.go:334] "Generic (PLEG): container finished" podID="81fbdceb-1c27-4507-8317-fa5b8e427716" containerID="d7078b3515259ead102e0e234fafac976acfe2d86ef701cc0d754d20c6af2acf" exitCode=0 Jan 23 12:10:55 crc kubenswrapper[4865]: I0123 12:10:55.471321 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4c64-account-create-update-q7s48" event={"ID":"81fbdceb-1c27-4507-8317-fa5b8e427716","Type":"ContainerDied","Data":"d7078b3515259ead102e0e234fafac976acfe2d86ef701cc0d754d20c6af2acf"} Jan 23 12:10:55 crc kubenswrapper[4865]: I0123 12:10:55.478351 4865 generic.go:334] "Generic (PLEG): container finished" podID="47bf79e7-dbca-4568-877d-82d13222755e" containerID="e3d3312765c37a25a5186fca3d2a2b049e7bc843cd0e02a2e10a5d1a07eb5fcb" exitCode=0 Jan 23 12:10:55 crc kubenswrapper[4865]: I0123 12:10:55.478459 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-27ee-account-create-update-twq6z" event={"ID":"47bf79e7-dbca-4568-877d-82d13222755e","Type":"ContainerDied","Data":"e3d3312765c37a25a5186fca3d2a2b049e7bc843cd0e02a2e10a5d1a07eb5fcb"} Jan 23 12:10:55 crc kubenswrapper[4865]: I0123 12:10:55.480144 4865 generic.go:334] "Generic (PLEG): container finished" podID="5de0053c-5a7e-4b59-a93a-90a9073cfa30" containerID="11fdab9ab1a6ffa11537d1ac2f703a49501f8ae83d85953bb7bf4497bfd3a75b" exitCode=0 Jan 23 12:10:55 crc kubenswrapper[4865]: I0123 12:10:55.480220 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-l2bdc" 
event={"ID":"5de0053c-5a7e-4b59-a93a-90a9073cfa30","Type":"ContainerDied","Data":"11fdab9ab1a6ffa11537d1ac2f703a49501f8ae83d85953bb7bf4497bfd3a75b"} Jan 23 12:10:58 crc kubenswrapper[4865]: I0123 12:10:58.407797 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:10:58 crc kubenswrapper[4865]: I0123 12:10:58.472828 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d5cd98cbf-9gkpm"] Jan 23 12:10:58 crc kubenswrapper[4865]: I0123 12:10:58.473081 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" podUID="613af722-ab75-4bff-a8b4-e7a19792c417" containerName="dnsmasq-dns" containerID="cri-o://2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa" gracePeriod=10 Jan 23 12:10:58 crc kubenswrapper[4865]: I0123 12:10:58.942486 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-ae31-account-create-update-m7fmp" Jan 23 12:10:58 crc kubenswrapper[4865]: I0123 12:10:58.950913 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2079-account-create-update-xtn8h" Jan 23 12:10:58 crc kubenswrapper[4865]: I0123 12:10:58.981839 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bg7bm" Jan 23 12:10:58 crc kubenswrapper[4865]: I0123 12:10:58.992116 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-l2bdc" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.005480 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-glvsw" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.027979 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mhlkc" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.104278 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-27ee-account-create-update-twq6z" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.112014 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86e93d72-830a-4415-b23a-91c49115233f-operator-scripts\") pod \"86e93d72-830a-4415-b23a-91c49115233f\" (UID: \"86e93d72-830a-4415-b23a-91c49115233f\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.112060 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af312aba-bce9-4e7c-a761-8ab57e0bb3e3-operator-scripts\") pod \"af312aba-bce9-4e7c-a761-8ab57e0bb3e3\" (UID: \"af312aba-bce9-4e7c-a761-8ab57e0bb3e3\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.112100 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxtwp\" (UniqueName: \"kubernetes.io/projected/af312aba-bce9-4e7c-a761-8ab57e0bb3e3-kube-api-access-hxtwp\") pod \"af312aba-bce9-4e7c-a761-8ab57e0bb3e3\" (UID: \"af312aba-bce9-4e7c-a761-8ab57e0bb3e3\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.112225 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e83a6fe-4aab-44f6-a5c3-1d2afd376278-operator-scripts\") pod \"5e83a6fe-4aab-44f6-a5c3-1d2afd376278\" (UID: \"5e83a6fe-4aab-44f6-a5c3-1d2afd376278\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.112265 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcs9r\" (UniqueName: \"kubernetes.io/projected/5de0053c-5a7e-4b59-a93a-90a9073cfa30-kube-api-access-kcs9r\") pod \"5de0053c-5a7e-4b59-a93a-90a9073cfa30\" (UID: \"5de0053c-5a7e-4b59-a93a-90a9073cfa30\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.112330 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4-operator-scripts\") pod \"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4\" (UID: \"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.112397 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgwqx\" (UniqueName: \"kubernetes.io/projected/5e83a6fe-4aab-44f6-a5c3-1d2afd376278-kube-api-access-hgwqx\") pod \"5e83a6fe-4aab-44f6-a5c3-1d2afd376278\" (UID: \"5e83a6fe-4aab-44f6-a5c3-1d2afd376278\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.112450 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5q2m\" (UniqueName: \"kubernetes.io/projected/86e93d72-830a-4415-b23a-91c49115233f-kube-api-access-f5q2m\") pod \"86e93d72-830a-4415-b23a-91c49115233f\" (UID: \"86e93d72-830a-4415-b23a-91c49115233f\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.112476 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wg2z\" (UniqueName: \"kubernetes.io/projected/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4-kube-api-access-4wg2z\") pod \"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4\" (UID: \"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.112503 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5de0053c-5a7e-4b59-a93a-90a9073cfa30-operator-scripts\") pod \"5de0053c-5a7e-4b59-a93a-90a9073cfa30\" (UID: \"5de0053c-5a7e-4b59-a93a-90a9073cfa30\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.113341 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5de0053c-5a7e-4b59-a93a-90a9073cfa30-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5de0053c-5a7e-4b59-a93a-90a9073cfa30" (UID: "5de0053c-5a7e-4b59-a93a-90a9073cfa30"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.114017 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86e93d72-830a-4415-b23a-91c49115233f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86e93d72-830a-4415-b23a-91c49115233f" (UID: "86e93d72-830a-4415-b23a-91c49115233f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.114353 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4" (UID: "e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.114710 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e83a6fe-4aab-44f6-a5c3-1d2afd376278-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e83a6fe-4aab-44f6-a5c3-1d2afd376278" (UID: "5e83a6fe-4aab-44f6-a5c3-1d2afd376278"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.115104 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af312aba-bce9-4e7c-a761-8ab57e0bb3e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af312aba-bce9-4e7c-a761-8ab57e0bb3e3" (UID: "af312aba-bce9-4e7c-a761-8ab57e0bb3e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.120302 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86e93d72-830a-4415-b23a-91c49115233f-kube-api-access-f5q2m" (OuterVolumeSpecName: "kube-api-access-f5q2m") pod "86e93d72-830a-4415-b23a-91c49115233f" (UID: "86e93d72-830a-4415-b23a-91c49115233f"). InnerVolumeSpecName "kube-api-access-f5q2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.124935 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de0053c-5a7e-4b59-a93a-90a9073cfa30-kube-api-access-kcs9r" (OuterVolumeSpecName: "kube-api-access-kcs9r") pod "5de0053c-5a7e-4b59-a93a-90a9073cfa30" (UID: "5de0053c-5a7e-4b59-a93a-90a9073cfa30"). InnerVolumeSpecName "kube-api-access-kcs9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.125144 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4-kube-api-access-4wg2z" (OuterVolumeSpecName: "kube-api-access-4wg2z") pod "e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4" (UID: "e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4"). InnerVolumeSpecName "kube-api-access-4wg2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.125245 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af312aba-bce9-4e7c-a761-8ab57e0bb3e3-kube-api-access-hxtwp" (OuterVolumeSpecName: "kube-api-access-hxtwp") pod "af312aba-bce9-4e7c-a761-8ab57e0bb3e3" (UID: "af312aba-bce9-4e7c-a761-8ab57e0bb3e3"). InnerVolumeSpecName "kube-api-access-hxtwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.129857 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4c64-account-create-update-q7s48" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.130195 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e83a6fe-4aab-44f6-a5c3-1d2afd376278-kube-api-access-hgwqx" (OuterVolumeSpecName: "kube-api-access-hgwqx") pod "5e83a6fe-4aab-44f6-a5c3-1d2afd376278" (UID: "5e83a6fe-4aab-44f6-a5c3-1d2afd376278"). InnerVolumeSpecName "kube-api-access-hgwqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.137490 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.214448 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47bf79e7-dbca-4568-877d-82d13222755e-operator-scripts\") pod \"47bf79e7-dbca-4568-877d-82d13222755e\" (UID: \"47bf79e7-dbca-4568-877d-82d13222755e\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.215468 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcfn5\" (UniqueName: \"kubernetes.io/projected/47bf79e7-dbca-4568-877d-82d13222755e-kube-api-access-fcfn5\") pod \"47bf79e7-dbca-4568-877d-82d13222755e\" (UID: \"47bf79e7-dbca-4568-877d-82d13222755e\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.215507 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnb8z\" (UniqueName: \"kubernetes.io/projected/0a5ba781-e3a4-4458-918c-816f636b14bf-kube-api-access-jnb8z\") pod \"0a5ba781-e3a4-4458-918c-816f636b14bf\" (UID: \"0a5ba781-e3a4-4458-918c-816f636b14bf\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.215530 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a5ba781-e3a4-4458-918c-816f636b14bf-operator-scripts\") pod \"0a5ba781-e3a4-4458-918c-816f636b14bf\" (UID: \"0a5ba781-e3a4-4458-918c-816f636b14bf\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.216079 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e83a6fe-4aab-44f6-a5c3-1d2afd376278-operator-scripts\") on node \"crc\" DevicePath 
\"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.216094 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcs9r\" (UniqueName: \"kubernetes.io/projected/5de0053c-5a7e-4b59-a93a-90a9073cfa30-kube-api-access-kcs9r\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.216104 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.216112 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgwqx\" (UniqueName: \"kubernetes.io/projected/5e83a6fe-4aab-44f6-a5c3-1d2afd376278-kube-api-access-hgwqx\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.216120 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wg2z\" (UniqueName: \"kubernetes.io/projected/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4-kube-api-access-4wg2z\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.216128 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5q2m\" (UniqueName: \"kubernetes.io/projected/86e93d72-830a-4415-b23a-91c49115233f-kube-api-access-f5q2m\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.216137 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5de0053c-5a7e-4b59-a93a-90a9073cfa30-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.216145 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86e93d72-830a-4415-b23a-91c49115233f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.216153 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af312aba-bce9-4e7c-a761-8ab57e0bb3e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.216162 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxtwp\" (UniqueName: \"kubernetes.io/projected/af312aba-bce9-4e7c-a761-8ab57e0bb3e3-kube-api-access-hxtwp\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.215309 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47bf79e7-dbca-4568-877d-82d13222755e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47bf79e7-dbca-4568-877d-82d13222755e" (UID: "47bf79e7-dbca-4568-877d-82d13222755e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.217494 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a5ba781-e3a4-4458-918c-816f636b14bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a5ba781-e3a4-4458-918c-816f636b14bf" (UID: "0a5ba781-e3a4-4458-918c-816f636b14bf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.221948 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5ba781-e3a4-4458-918c-816f636b14bf-kube-api-access-jnb8z" (OuterVolumeSpecName: "kube-api-access-jnb8z") pod "0a5ba781-e3a4-4458-918c-816f636b14bf" (UID: "0a5ba781-e3a4-4458-918c-816f636b14bf"). InnerVolumeSpecName "kube-api-access-jnb8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.222733 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47bf79e7-dbca-4568-877d-82d13222755e-kube-api-access-fcfn5" (OuterVolumeSpecName: "kube-api-access-fcfn5") pod "47bf79e7-dbca-4568-877d-82d13222755e" (UID: "47bf79e7-dbca-4568-877d-82d13222755e"). InnerVolumeSpecName "kube-api-access-fcfn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.316846 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-dns-svc\") pod \"613af722-ab75-4bff-a8b4-e7a19792c417\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.316901 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-config\") pod \"613af722-ab75-4bff-a8b4-e7a19792c417\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.317075 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxjfk\" (UniqueName: \"kubernetes.io/projected/81fbdceb-1c27-4507-8317-fa5b8e427716-kube-api-access-cxjfk\") pod \"81fbdceb-1c27-4507-8317-fa5b8e427716\" (UID: \"81fbdceb-1c27-4507-8317-fa5b8e427716\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.317111 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-ovsdbserver-nb\") pod \"613af722-ab75-4bff-a8b4-e7a19792c417\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.317131 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-ovsdbserver-sb\") pod \"613af722-ab75-4bff-a8b4-e7a19792c417\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.317157 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kqvv\" (UniqueName: \"kubernetes.io/projected/613af722-ab75-4bff-a8b4-e7a19792c417-kube-api-access-4kqvv\") pod \"613af722-ab75-4bff-a8b4-e7a19792c417\" (UID: \"613af722-ab75-4bff-a8b4-e7a19792c417\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.317187 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fbdceb-1c27-4507-8317-fa5b8e427716-operator-scripts\") pod \"81fbdceb-1c27-4507-8317-fa5b8e427716\" (UID: \"81fbdceb-1c27-4507-8317-fa5b8e427716\") " Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.317454 4865 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcfn5\" (UniqueName: \"kubernetes.io/projected/47bf79e7-dbca-4568-877d-82d13222755e-kube-api-access-fcfn5\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.317473 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnb8z\" (UniqueName: \"kubernetes.io/projected/0a5ba781-e3a4-4458-918c-816f636b14bf-kube-api-access-jnb8z\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.317482 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a5ba781-e3a4-4458-918c-816f636b14bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.317491 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47bf79e7-dbca-4568-877d-82d13222755e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.318048 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81fbdceb-1c27-4507-8317-fa5b8e427716-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81fbdceb-1c27-4507-8317-fa5b8e427716" (UID: "81fbdceb-1c27-4507-8317-fa5b8e427716"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.321403 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/613af722-ab75-4bff-a8b4-e7a19792c417-kube-api-access-4kqvv" (OuterVolumeSpecName: "kube-api-access-4kqvv") pod "613af722-ab75-4bff-a8b4-e7a19792c417" (UID: "613af722-ab75-4bff-a8b4-e7a19792c417"). InnerVolumeSpecName "kube-api-access-4kqvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.321694 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81fbdceb-1c27-4507-8317-fa5b8e427716-kube-api-access-cxjfk" (OuterVolumeSpecName: "kube-api-access-cxjfk") pod "81fbdceb-1c27-4507-8317-fa5b8e427716" (UID: "81fbdceb-1c27-4507-8317-fa5b8e427716"). InnerVolumeSpecName "kube-api-access-cxjfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.355435 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "613af722-ab75-4bff-a8b4-e7a19792c417" (UID: "613af722-ab75-4bff-a8b4-e7a19792c417"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.356898 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-config" (OuterVolumeSpecName: "config") pod "613af722-ab75-4bff-a8b4-e7a19792c417" (UID: "613af722-ab75-4bff-a8b4-e7a19792c417"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.358351 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "613af722-ab75-4bff-a8b4-e7a19792c417" (UID: "613af722-ab75-4bff-a8b4-e7a19792c417"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.359875 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "613af722-ab75-4bff-a8b4-e7a19792c417" (UID: "613af722-ab75-4bff-a8b4-e7a19792c417"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.419402 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxjfk\" (UniqueName: \"kubernetes.io/projected/81fbdceb-1c27-4507-8317-fa5b8e427716-kube-api-access-cxjfk\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.420723 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.420953 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.421072 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kqvv\" (UniqueName: \"kubernetes.io/projected/613af722-ab75-4bff-a8b4-e7a19792c417-kube-api-access-4kqvv\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.421169 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81fbdceb-1c27-4507-8317-fa5b8e427716-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.421284 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.421391 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/613af722-ab75-4bff-a8b4-e7a19792c417-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.514837 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-glvsw" event={"ID":"5e83a6fe-4aab-44f6-a5c3-1d2afd376278","Type":"ContainerDied","Data":"24b893ef969143f494324ecdff20379ddf6da026afe93d1d2bd16f839a0e8b40"} Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.515089 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24b893ef969143f494324ecdff20379ddf6da026afe93d1d2bd16f839a0e8b40" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.515733 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-glvsw" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.519412 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2079-account-create-update-xtn8h" event={"ID":"e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4","Type":"ContainerDied","Data":"fde9db4f44c3f6ed5f7945a9140fdd6075f3041e8bc85b082020fc19e219f0da"} Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.519809 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fde9db4f44c3f6ed5f7945a9140fdd6075f3041e8bc85b082020fc19e219f0da" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.519582 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2079-account-create-update-xtn8h" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.524166 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-ae31-account-create-update-m7fmp" event={"ID":"af312aba-bce9-4e7c-a761-8ab57e0bb3e3","Type":"ContainerDied","Data":"6f506e393c5ccecb827ae0f87c13d704ff5ae5850b1f37a4f0983cef18739c69"} Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.524951 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f506e393c5ccecb827ae0f87c13d704ff5ae5850b1f37a4f0983cef18739c69" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.525294 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-ae31-account-create-update-m7fmp" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.526261 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-mhlkc" event={"ID":"0a5ba781-e3a4-4458-918c-816f636b14bf","Type":"ContainerDied","Data":"b2cd9a067d9e42f7cf20110dd73ebd20d29043c38d4b430f491b7b9d88c66a68"} Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.526291 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2cd9a067d9e42f7cf20110dd73ebd20d29043c38d4b430f491b7b9d88c66a68" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.526376 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-mhlkc" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.533454 4865 generic.go:334] "Generic (PLEG): container finished" podID="613af722-ab75-4bff-a8b4-e7a19792c417" containerID="2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa" exitCode=0 Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.533567 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.533577 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" event={"ID":"613af722-ab75-4bff-a8b4-e7a19792c417","Type":"ContainerDied","Data":"2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa"} Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.533623 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d5cd98cbf-9gkpm" event={"ID":"613af722-ab75-4bff-a8b4-e7a19792c417","Type":"ContainerDied","Data":"dbb2a02a5aa2a3879147ace6f2f2f0713c0d6eb69bf883d10a357e3e98912600"} Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.533643 4865 scope.go:117] "RemoveContainer" containerID="2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.535460 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-l2bdc" event={"ID":"5de0053c-5a7e-4b59-a93a-90a9073cfa30","Type":"ContainerDied","Data":"55729e550fd00df13207763f9fe480af536d2aa0227fa53db2c11cb113e68220"} Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.535486 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55729e550fd00df13207763f9fe480af536d2aa0227fa53db2c11cb113e68220" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.535548 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-l2bdc" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.546960 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-bg7bm" event={"ID":"86e93d72-830a-4415-b23a-91c49115233f","Type":"ContainerDied","Data":"fdcd24b076483b7bed553aed30242377dbc8518c1620b967092dfeb984d9e365"} Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.547130 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdcd24b076483b7bed553aed30242377dbc8518c1620b967092dfeb984d9e365" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.547302 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-bg7bm" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.555966 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-g94f9" event={"ID":"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6","Type":"ContainerStarted","Data":"84bbd32f5e5592885910db180a5c03c622605469887843c4481ac3d99561a1f9"} Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.559147 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4c64-account-create-update-q7s48" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.559149 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4c64-account-create-update-q7s48" event={"ID":"81fbdceb-1c27-4507-8317-fa5b8e427716","Type":"ContainerDied","Data":"1bf915da2895eb914ce7c75a0d6418eeeb7bec68c6891d531acfdb9082ce0b28"} Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.559443 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bf915da2895eb914ce7c75a0d6418eeeb7bec68c6891d531acfdb9082ce0b28" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.574844 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-27ee-account-create-update-twq6z" event={"ID":"47bf79e7-dbca-4568-877d-82d13222755e","Type":"ContainerDied","Data":"e05c89f306a4f86dba801af0e9d35b9ac4e2f2196740530eb93134ebf16db978"} Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.574882 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-27ee-account-create-update-twq6z" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.574856 4865 scope.go:117] "RemoveContainer" containerID="8cde73ef656a69a185fb223b434b1ae44e95a5a44dcb034019be0bfcf1e7108d" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.574885 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e05c89f306a4f86dba801af0e9d35b9ac4e2f2196740530eb93134ebf16db978" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.575125 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-g94f9" podStartSLOduration=3.646388567 podStartE2EDuration="8.575117199s" podCreationTimestamp="2026-01-23 12:10:51 +0000 UTC" firstStartedPulling="2026-01-23 12:10:53.839152865 +0000 UTC m=+1098.008225091" lastFinishedPulling="2026-01-23 12:10:58.767881497 +0000 UTC m=+1102.936953723" observedRunningTime="2026-01-23 12:10:59.571206453 +0000 UTC m=+1103.740278679" watchObservedRunningTime="2026-01-23 12:10:59.575117199 +0000 UTC m=+1103.744189425" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.623592 4865 scope.go:117] "RemoveContainer" containerID="2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa" Jan 23 12:10:59 crc kubenswrapper[4865]: E0123 12:10:59.624704 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa\": container with ID starting with 2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa not found: ID does not exist" containerID="2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.624813 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa"} err="failed to get container status \"2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa\": rpc error: code = NotFound desc = could not find container \"2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa\": container with ID starting with 2c8042bd8f4d738e45f7eb0878f8e279b2a79535e7101e29bbed1e2acc9042fa not found: ID does not exist" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.624951 4865 scope.go:117] "RemoveContainer" 
containerID="8cde73ef656a69a185fb223b434b1ae44e95a5a44dcb034019be0bfcf1e7108d" Jan 23 12:10:59 crc kubenswrapper[4865]: E0123 12:10:59.625581 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cde73ef656a69a185fb223b434b1ae44e95a5a44dcb034019be0bfcf1e7108d\": container with ID starting with 8cde73ef656a69a185fb223b434b1ae44e95a5a44dcb034019be0bfcf1e7108d not found: ID does not exist" containerID="8cde73ef656a69a185fb223b434b1ae44e95a5a44dcb034019be0bfcf1e7108d" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.625725 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cde73ef656a69a185fb223b434b1ae44e95a5a44dcb034019be0bfcf1e7108d"} err="failed to get container status \"8cde73ef656a69a185fb223b434b1ae44e95a5a44dcb034019be0bfcf1e7108d\": rpc error: code = NotFound desc = could not find container \"8cde73ef656a69a185fb223b434b1ae44e95a5a44dcb034019be0bfcf1e7108d\": container with ID starting with 8cde73ef656a69a185fb223b434b1ae44e95a5a44dcb034019be0bfcf1e7108d not found: ID does not exist" Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.660775 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d5cd98cbf-9gkpm"] Jan 23 12:10:59 crc kubenswrapper[4865]: I0123 12:10:59.669682 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d5cd98cbf-9gkpm"] Jan 23 12:11:00 crc kubenswrapper[4865]: I0123 12:11:00.129055 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="613af722-ab75-4bff-a8b4-e7a19792c417" path="/var/lib/kubelet/pods/613af722-ab75-4bff-a8b4-e7a19792c417/volumes" Jan 23 12:11:02 crc kubenswrapper[4865]: I0123 12:11:02.608158 4865 generic.go:334] "Generic (PLEG): container finished" podID="d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6" containerID="84bbd32f5e5592885910db180a5c03c622605469887843c4481ac3d99561a1f9" exitCode=0 Jan 23 12:11:02 crc kubenswrapper[4865]: I0123 12:11:02.608246 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-g94f9" event={"ID":"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6","Type":"ContainerDied","Data":"84bbd32f5e5592885910db180a5c03c622605469887843c4481ac3d99561a1f9"} Jan 23 12:11:03 crc kubenswrapper[4865]: I0123 12:11:03.908947 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-g94f9" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.089265 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-combined-ca-bundle\") pod \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\" (UID: \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\") " Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.089346 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gbcn\" (UniqueName: \"kubernetes.io/projected/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-kube-api-access-9gbcn\") pod \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\" (UID: \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\") " Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.089450 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-config-data\") pod \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\" (UID: \"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6\") " Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.095145 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-kube-api-access-9gbcn" (OuterVolumeSpecName: "kube-api-access-9gbcn") pod "d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6" (UID: "d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6"). InnerVolumeSpecName "kube-api-access-9gbcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.122336 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6" (UID: "d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.158838 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-config-data" (OuterVolumeSpecName: "config-data") pod "d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6" (UID: "d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.190929 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.191067 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.191199 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gbcn\" (UniqueName: \"kubernetes.io/projected/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6-kube-api-access-9gbcn\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.626381 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-g94f9" event={"ID":"d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6","Type":"ContainerDied","Data":"946be756f589eb4c05d2d2fdee1668a2e11f6f5d3cdfbdef4c6c40caf8bc010a"} Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.626689 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="946be756f589eb4c05d2d2fdee1668a2e11f6f5d3cdfbdef4c6c40caf8bc010a" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.626438 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-g94f9" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.996358 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68467b6d7-q5r6r"] Jan 23 12:11:04 crc kubenswrapper[4865]: E0123 12:11:04.996859 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="613af722-ab75-4bff-a8b4-e7a19792c417" containerName="dnsmasq-dns" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.996875 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="613af722-ab75-4bff-a8b4-e7a19792c417" containerName="dnsmasq-dns" Jan 23 12:11:04 crc kubenswrapper[4865]: E0123 12:11:04.996888 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4" containerName="mariadb-account-create-update" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.996897 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4" containerName="mariadb-account-create-update" Jan 23 12:11:04 crc kubenswrapper[4865]: E0123 12:11:04.996945 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81fbdceb-1c27-4507-8317-fa5b8e427716" containerName="mariadb-account-create-update" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.996955 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="81fbdceb-1c27-4507-8317-fa5b8e427716" containerName="mariadb-account-create-update" Jan 23 12:11:04 crc kubenswrapper[4865]: E0123 12:11:04.996965 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47bf79e7-dbca-4568-877d-82d13222755e" containerName="mariadb-account-create-update" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.996973 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="47bf79e7-dbca-4568-877d-82d13222755e" containerName="mariadb-account-create-update" Jan 23 12:11:04 crc kubenswrapper[4865]: E0123 12:11:04.996983 4865 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="86e93d72-830a-4415-b23a-91c49115233f" containerName="mariadb-database-create" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.996991 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="86e93d72-830a-4415-b23a-91c49115233f" containerName="mariadb-database-create" Jan 23 12:11:04 crc kubenswrapper[4865]: E0123 12:11:04.997004 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e83a6fe-4aab-44f6-a5c3-1d2afd376278" containerName="mariadb-database-create" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997012 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e83a6fe-4aab-44f6-a5c3-1d2afd376278" containerName="mariadb-database-create" Jan 23 12:11:04 crc kubenswrapper[4865]: E0123 12:11:04.997029 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af312aba-bce9-4e7c-a761-8ab57e0bb3e3" containerName="mariadb-account-create-update" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997037 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="af312aba-bce9-4e7c-a761-8ab57e0bb3e3" containerName="mariadb-account-create-update" Jan 23 12:11:04 crc kubenswrapper[4865]: E0123 12:11:04.997056 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de0053c-5a7e-4b59-a93a-90a9073cfa30" containerName="mariadb-database-create" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997064 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de0053c-5a7e-4b59-a93a-90a9073cfa30" containerName="mariadb-database-create" Jan 23 12:11:04 crc kubenswrapper[4865]: E0123 12:11:04.997080 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6" containerName="keystone-db-sync" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997089 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6" containerName="keystone-db-sync" Jan 23 12:11:04 crc kubenswrapper[4865]: E0123 12:11:04.997101 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="613af722-ab75-4bff-a8b4-e7a19792c417" containerName="init" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997109 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="613af722-ab75-4bff-a8b4-e7a19792c417" containerName="init" Jan 23 12:11:04 crc kubenswrapper[4865]: E0123 12:11:04.997132 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5ba781-e3a4-4458-918c-816f636b14bf" containerName="mariadb-database-create" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997140 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5ba781-e3a4-4458-918c-816f636b14bf" containerName="mariadb-database-create" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997335 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e83a6fe-4aab-44f6-a5c3-1d2afd376278" containerName="mariadb-database-create" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997349 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="613af722-ab75-4bff-a8b4-e7a19792c417" containerName="dnsmasq-dns" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997357 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6" containerName="keystone-db-sync" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997369 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5ba781-e3a4-4458-918c-816f636b14bf" containerName="mariadb-database-create" Jan 
23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997381 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="47bf79e7-dbca-4568-877d-82d13222755e" containerName="mariadb-account-create-update" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997395 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="81fbdceb-1c27-4507-8317-fa5b8e427716" containerName="mariadb-account-create-update" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997407 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de0053c-5a7e-4b59-a93a-90a9073cfa30" containerName="mariadb-database-create" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997418 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="86e93d72-830a-4415-b23a-91c49115233f" containerName="mariadb-database-create" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997429 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="af312aba-bce9-4e7c-a761-8ab57e0bb3e3" containerName="mariadb-account-create-update" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.997444 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4" containerName="mariadb-account-create-update" Jan 23 12:11:04 crc kubenswrapper[4865]: I0123 12:11:04.998538 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.004183 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68467b6d7-q5r6r"] Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.106826 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ntg2v"] Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.116358 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.123478 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.123721 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.123882 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.124212 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.124397 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nlns" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.153023 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-config-data\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.153082 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-config\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.153115 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-ovsdbserver-sb\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.153143 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-dns-swift-storage-0\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.153178 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfndv\" (UniqueName: \"kubernetes.io/projected/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-kube-api-access-jfndv\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.153208 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-credential-keys\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.153223 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-combined-ca-bundle\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.153239 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-fernet-keys\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.153270 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb8v8\" (UniqueName: \"kubernetes.io/projected/0b310659-1057-49d4-9016-572d3f4b031e-kube-api-access-sb8v8\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.153288 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-ovsdbserver-nb\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.153306 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-scripts\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.153368 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-dns-svc\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.156207 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ntg2v"] Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.257793 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-config-data\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.257861 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-config\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.257896 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-ovsdbserver-sb\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.257938 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-dns-swift-storage-0\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.257994 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfndv\" (UniqueName: \"kubernetes.io/projected/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-kube-api-access-jfndv\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.258042 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-credential-keys\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.258065 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-combined-ca-bundle\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.258089 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-fernet-keys\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.258133 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb8v8\" (UniqueName: \"kubernetes.io/projected/0b310659-1057-49d4-9016-572d3f4b031e-kube-api-access-sb8v8\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.258159 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-ovsdbserver-nb\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.258183 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-scripts\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.258245 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-dns-svc\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.259342 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-dns-svc\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.260114 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-config\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.260794 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-ovsdbserver-sb\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.262709 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-ovsdbserver-nb\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.265045 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-config-data\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.265199 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-dns-swift-storage-0\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.269585 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-fernet-keys\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.269981 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-scripts\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.270959 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-combined-ca-bundle\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.273094 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-credential-keys\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 
12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.302838 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb8v8\" (UniqueName: \"kubernetes.io/projected/0b310659-1057-49d4-9016-572d3f4b031e-kube-api-access-sb8v8\") pod \"keystone-bootstrap-ntg2v\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.328380 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfndv\" (UniqueName: \"kubernetes.io/projected/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-kube-api-access-jfndv\") pod \"dnsmasq-dns-68467b6d7-q5r6r\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.441231 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.568531 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xqdv2"] Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.569870 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.576852 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-b4zk8" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.577130 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.590248 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xqdv2"] Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.623962 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.665187 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-db-sync-config-data\") pod \"barbican-db-sync-xqdv2\" (UID: \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\") " pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.665242 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-combined-ca-bundle\") pod \"barbican-db-sync-xqdv2\" (UID: \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\") " pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.665263 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-277q4\" (UniqueName: \"kubernetes.io/projected/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-kube-api-access-277q4\") pod \"barbican-db-sync-xqdv2\" (UID: \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\") " pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.767060 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-db-sync-config-data\") pod \"barbican-db-sync-xqdv2\" (UID: \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\") " pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.767116 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-combined-ca-bundle\") pod \"barbican-db-sync-xqdv2\" (UID: \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\") " pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.767139 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-277q4\" (UniqueName: \"kubernetes.io/projected/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-kube-api-access-277q4\") pod \"barbican-db-sync-xqdv2\" (UID: \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\") " pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.773319 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-combined-ca-bundle\") pod \"barbican-db-sync-xqdv2\" (UID: \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\") " pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.776187 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-db-sync-config-data\") pod \"barbican-db-sync-xqdv2\" (UID: \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\") " pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.793299 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-277q4\" (UniqueName: \"kubernetes.io/projected/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-kube-api-access-277q4\") pod \"barbican-db-sync-xqdv2\" (UID: \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\") " 
pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.856158 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-sxqmn"] Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.857193 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.866510 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.866710 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.866825 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-f7zrz" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.868476 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrpg\" (UniqueName: \"kubernetes.io/projected/afab83a5-8e47-4531-80de-ae69dfd11bd9-kube-api-access-lnrpg\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.868528 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-db-sync-config-data\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.868546 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-config-data\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.868585 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-scripts\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.868622 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-combined-ca-bundle\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.868656 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/afab83a5-8e47-4531-80de-ae69dfd11bd9-etc-machine-id\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.885640 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-sxqmn"] Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.898762 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.919459 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-jw9z7"] Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.920593 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jw9z7" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.923263 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gqlj9" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.925892 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.974012 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/afab83a5-8e47-4531-80de-ae69dfd11bd9-etc-machine-id\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.974259 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/afab83a5-8e47-4531-80de-ae69dfd11bd9-etc-machine-id\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.974428 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnrpg\" (UniqueName: \"kubernetes.io/projected/afab83a5-8e47-4531-80de-ae69dfd11bd9-kube-api-access-lnrpg\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.974493 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-db-sync-config-data\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.974516 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-config-data\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.974583 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-scripts\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.974631 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-combined-ca-bundle\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:05 crc kubenswrapper[4865]: I0123 12:11:05.992268 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-db-sync-config-data\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.005669 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jw9z7"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.018316 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-scripts\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.021522 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-combined-ca-bundle\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.021752 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-config-data\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.022891 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnrpg\" (UniqueName: \"kubernetes.io/projected/afab83a5-8e47-4531-80de-ae69dfd11bd9-kube-api-access-lnrpg\") pod \"cinder-db-sync-sxqmn\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.075948 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltjwl\" (UniqueName: \"kubernetes.io/projected/3e6117d5-9df1-4299-8358-d7235d7847d2-kube-api-access-ltjwl\") pod \"heat-db-sync-jw9z7\" (UID: \"3e6117d5-9df1-4299-8358-d7235d7847d2\") " pod="openstack/heat-db-sync-jw9z7" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.076026 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6117d5-9df1-4299-8358-d7235d7847d2-combined-ca-bundle\") pod \"heat-db-sync-jw9z7\" (UID: \"3e6117d5-9df1-4299-8358-d7235d7847d2\") " pod="openstack/heat-db-sync-jw9z7" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.076088 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e6117d5-9df1-4299-8358-d7235d7847d2-config-data\") pod \"heat-db-sync-jw9z7\" (UID: \"3e6117d5-9df1-4299-8358-d7235d7847d2\") " pod="openstack/heat-db-sync-jw9z7" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.083380 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ntg2v"] Jan 23 12:11:06 crc kubenswrapper[4865]: W0123 12:11:06.083399 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b310659_1057_49d4_9016_572d3f4b031e.slice/crio-b43f81dc035aefb390c090e487eff821dab8cf633c2c0d8f2d876377afba4e56 WatchSource:0}: Error finding container 
b43f81dc035aefb390c090e487eff821dab8cf633c2c0d8f2d876377afba4e56: Status 404 returned error can't find the container with id b43f81dc035aefb390c090e487eff821dab8cf633c2c0d8f2d876377afba4e56 Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.175333 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.182263 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6117d5-9df1-4299-8358-d7235d7847d2-combined-ca-bundle\") pod \"heat-db-sync-jw9z7\" (UID: \"3e6117d5-9df1-4299-8358-d7235d7847d2\") " pod="openstack/heat-db-sync-jw9z7" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.182393 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e6117d5-9df1-4299-8358-d7235d7847d2-config-data\") pod \"heat-db-sync-jw9z7\" (UID: \"3e6117d5-9df1-4299-8358-d7235d7847d2\") " pod="openstack/heat-db-sync-jw9z7" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.182449 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltjwl\" (UniqueName: \"kubernetes.io/projected/3e6117d5-9df1-4299-8358-d7235d7847d2-kube-api-access-ltjwl\") pod \"heat-db-sync-jw9z7\" (UID: \"3e6117d5-9df1-4299-8358-d7235d7847d2\") " pod="openstack/heat-db-sync-jw9z7" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.203242 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6117d5-9df1-4299-8358-d7235d7847d2-combined-ca-bundle\") pod \"heat-db-sync-jw9z7\" (UID: \"3e6117d5-9df1-4299-8358-d7235d7847d2\") " pod="openstack/heat-db-sync-jw9z7" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.210660 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68467b6d7-q5r6r"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.220466 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltjwl\" (UniqueName: \"kubernetes.io/projected/3e6117d5-9df1-4299-8358-d7235d7847d2-kube-api-access-ltjwl\") pod \"heat-db-sync-jw9z7\" (UID: \"3e6117d5-9df1-4299-8358-d7235d7847d2\") " pod="openstack/heat-db-sync-jw9z7" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.221538 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e6117d5-9df1-4299-8358-d7235d7847d2-config-data\") pod \"heat-db-sync-jw9z7\" (UID: \"3e6117d5-9df1-4299-8358-d7235d7847d2\") " pod="openstack/heat-db-sync-jw9z7" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.246571 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-scpxv"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.247534 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.255269 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.255462 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jh6tv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.255572 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.285310 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jw9z7" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.344080 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d47ff97-wl986"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.345942 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.355909 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.356833 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-674c76ff67-kjrj6"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.358443 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.359077 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-2mrfh" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.359257 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.359387 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.383748 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-zntkp"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.384846 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zntkp" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.391363 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84103afc-63d9-416c-bc51-729cd8c6eeed-logs\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.391448 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-combined-ca-bundle\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.391503 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-scripts\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.391557 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27qvx\" (UniqueName: \"kubernetes.io/projected/84103afc-63d9-416c-bc51-729cd8c6eeed-kube-api-access-27qvx\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.391668 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-config-data\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.399740 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674c76ff67-kjrj6"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.422267 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qjk5k" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.422555 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.433004 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.477639 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d47ff97-wl986"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.589389 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27qvx\" (UniqueName: \"kubernetes.io/projected/84103afc-63d9-416c-bc51-729cd8c6eeed-kube-api-access-27qvx\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.589506 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-dns-swift-storage-0\") pod 
\"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.601578 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74713871-78be-40f9-97f5-13282a5bfe9e-horizon-secret-key\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.601894 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-ovsdbserver-sb\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.601981 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-config-data\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.602066 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrt9c\" (UniqueName: \"kubernetes.io/projected/c07bee20-b47c-4881-87bc-adba361cd25a-kube-api-access-mrt9c\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.602266 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74713871-78be-40f9-97f5-13282a5bfe9e-logs\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.602383 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-dns-svc\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.602526 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghtcd\" (UniqueName: \"kubernetes.io/projected/0155ddd7-e729-44e5-b3c9-e18d88d171ef-kube-api-access-ghtcd\") pod \"neutron-db-sync-zntkp\" (UID: \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\") " pod="openstack/neutron-db-sync-zntkp" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.602680 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84103afc-63d9-416c-bc51-729cd8c6eeed-logs\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.602884 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-combined-ca-bundle\") pod 
\"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.602952 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f5cl\" (UniqueName: \"kubernetes.io/projected/74713871-78be-40f9-97f5-13282a5bfe9e-kube-api-access-9f5cl\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.616109 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0155ddd7-e729-44e5-b3c9-e18d88d171ef-config\") pod \"neutron-db-sync-zntkp\" (UID: \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\") " pod="openstack/neutron-db-sync-zntkp" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.616169 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-ovsdbserver-nb\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.616229 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-config\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.616248 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0155ddd7-e729-44e5-b3c9-e18d88d171ef-combined-ca-bundle\") pod \"neutron-db-sync-zntkp\" (UID: \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\") " pod="openstack/neutron-db-sync-zntkp" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.616286 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74713871-78be-40f9-97f5-13282a5bfe9e-config-data\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.616308 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74713871-78be-40f9-97f5-13282a5bfe9e-scripts\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.616348 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-scripts\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.641559 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84103afc-63d9-416c-bc51-729cd8c6eeed-logs\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " 
pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.672561 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68467b6d7-q5r6r"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.681923 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27qvx\" (UniqueName: \"kubernetes.io/projected/84103afc-63d9-416c-bc51-729cd8c6eeed-kube-api-access-27qvx\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.682435 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-config-data\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.683362 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-combined-ca-bundle\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.685663 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" event={"ID":"27b740f7-64fd-4ee7-abe7-f87b66f0b12a","Type":"ContainerStarted","Data":"8aa7671c39218e8714ce9ecbe5605ebdb62460b94e925027f1a3c30d2ec8a060"} Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.713359 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ntg2v" event={"ID":"0b310659-1057-49d4-9016-572d3f4b031e","Type":"ContainerStarted","Data":"b43f81dc035aefb390c090e487eff821dab8cf633c2c0d8f2d876377afba4e56"} Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742295 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghtcd\" (UniqueName: \"kubernetes.io/projected/0155ddd7-e729-44e5-b3c9-e18d88d171ef-kube-api-access-ghtcd\") pod \"neutron-db-sync-zntkp\" (UID: \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\") " pod="openstack/neutron-db-sync-zntkp" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742392 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f5cl\" (UniqueName: \"kubernetes.io/projected/74713871-78be-40f9-97f5-13282a5bfe9e-kube-api-access-9f5cl\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742421 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0155ddd7-e729-44e5-b3c9-e18d88d171ef-config\") pod \"neutron-db-sync-zntkp\" (UID: \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\") " pod="openstack/neutron-db-sync-zntkp" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742449 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-ovsdbserver-nb\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742493 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-config\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742512 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0155ddd7-e729-44e5-b3c9-e18d88d171ef-combined-ca-bundle\") pod \"neutron-db-sync-zntkp\" (UID: \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\") " pod="openstack/neutron-db-sync-zntkp" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742527 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74713871-78be-40f9-97f5-13282a5bfe9e-config-data\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742549 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74713871-78be-40f9-97f5-13282a5bfe9e-scripts\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742616 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-dns-swift-storage-0\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742638 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74713871-78be-40f9-97f5-13282a5bfe9e-horizon-secret-key\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742673 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-ovsdbserver-sb\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742695 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrt9c\" (UniqueName: \"kubernetes.io/projected/c07bee20-b47c-4881-87bc-adba361cd25a-kube-api-access-mrt9c\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742728 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74713871-78be-40f9-97f5-13282a5bfe9e-logs\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.742763 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-dns-svc\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.744164 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-dns-svc\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.746560 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74713871-78be-40f9-97f5-13282a5bfe9e-scripts\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.747202 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-ovsdbserver-nb\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.747724 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-config\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.748683 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0155ddd7-e729-44e5-b3c9-e18d88d171ef-config\") pod \"neutron-db-sync-zntkp\" (UID: \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\") " pod="openstack/neutron-db-sync-zntkp" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.749341 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-dns-swift-storage-0\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.754060 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74713871-78be-40f9-97f5-13282a5bfe9e-horizon-secret-key\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.754731 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-ovsdbserver-sb\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.755282 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74713871-78be-40f9-97f5-13282a5bfe9e-logs\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc 
kubenswrapper[4865]: I0123 12:11:06.757360 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0155ddd7-e729-44e5-b3c9-e18d88d171ef-combined-ca-bundle\") pod \"neutron-db-sync-zntkp\" (UID: \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\") " pod="openstack/neutron-db-sync-zntkp" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.757442 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74713871-78be-40f9-97f5-13282a5bfe9e-config-data\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.780547 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-scpxv"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.792009 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-scripts\") pod \"placement-db-sync-scpxv\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.803086 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghtcd\" (UniqueName: \"kubernetes.io/projected/0155ddd7-e729-44e5-b3c9-e18d88d171ef-kube-api-access-ghtcd\") pod \"neutron-db-sync-zntkp\" (UID: \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\") " pod="openstack/neutron-db-sync-zntkp" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.810547 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrt9c\" (UniqueName: \"kubernetes.io/projected/c07bee20-b47c-4881-87bc-adba361cd25a-kube-api-access-mrt9c\") pod \"dnsmasq-dns-674c76ff67-kjrj6\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.838258 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f5cl\" (UniqueName: \"kubernetes.io/projected/74713871-78be-40f9-97f5-13282a5bfe9e-kube-api-access-9f5cl\") pod \"horizon-6d47ff97-wl986\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.863728 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zntkp"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.869825 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.916053 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.924401 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.942668 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.942788 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.946744 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.973016 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.977782 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.983180 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.983421 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4dh5h" Jan 23 12:11:06 crc kubenswrapper[4865]: I0123 12:11:06.983573 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:06.994559 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:06.994559 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.009963 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.015539 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7999dc6947-8xp26"] Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.017003 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.029420 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.035000 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.036510 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.055099 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.055287 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056264 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2348b4-7647-4711-83a3-e4dbb907b9b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056294 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056313 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056333 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d804805-bf09-488a-80cc-ddda0ba1d466-run-httpd\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056353 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056372 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056404 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-config-data\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056427 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nxpt\" (UniqueName: \"kubernetes.io/projected/8d804805-bf09-488a-80cc-ddda0ba1d466-kube-api-access-9nxpt\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056443 
4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2348b4-7647-4711-83a3-e4dbb907b9b6-logs\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056462 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056477 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-scripts\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056509 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg7xw\" (UniqueName: \"kubernetes.io/projected/cf2348b4-7647-4711-83a3-e4dbb907b9b6-kube-api-access-pg7xw\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056563 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d804805-bf09-488a-80cc-ddda0ba1d466-log-httpd\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056581 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.056629 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.062360 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7999dc6947-8xp26"] Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.074854 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.075466 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zntkp" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.159639 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.159693 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-config-data\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167002 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d804805-bf09-488a-80cc-ddda0ba1d466-log-httpd\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167038 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167083 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167118 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2348b4-7647-4711-83a3-e4dbb907b9b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167142 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167162 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-scripts\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167192 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: 
I0123 12:11:07.167216 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d804805-bf09-488a-80cc-ddda0ba1d466-run-httpd\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167253 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167289 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167338 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0a0ac97-b923-44ce-8b90-31965497a560-scripts\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167385 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-config-data\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167406 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a0ac97-b923-44ce-8b90-31965497a560-logs\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167425 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0a0ac97-b923-44ce-8b90-31965497a560-config-data\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167459 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nxpt\" (UniqueName: \"kubernetes.io/projected/8d804805-bf09-488a-80cc-ddda0ba1d466-kube-api-access-9nxpt\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167504 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2348b4-7647-4711-83a3-e4dbb907b9b6-logs\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167685 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167738 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167755 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-scripts\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167807 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n226c\" (UniqueName: \"kubernetes.io/projected/f0a0ac97-b923-44ce-8b90-31965497a560-kube-api-access-n226c\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167838 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0a0ac97-b923-44ce-8b90-31965497a560-horizon-secret-key\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167881 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg7xw\" (UniqueName: \"kubernetes.io/projected/cf2348b4-7647-4711-83a3-e4dbb907b9b6-kube-api-access-pg7xw\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167908 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-logs\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167950 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167968 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pwk7\" (UniqueName: \"kubernetes.io/projected/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-kube-api-access-7pwk7\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.167987 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.169100 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d804805-bf09-488a-80cc-ddda0ba1d466-log-httpd\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.176881 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2348b4-7647-4711-83a3-e4dbb907b9b6-logs\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.177334 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d804805-bf09-488a-80cc-ddda0ba1d466-run-httpd\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.177579 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2348b4-7647-4711-83a3-e4dbb907b9b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.195152 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.199336 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-scripts\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.204399 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.221819 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.222969 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg7xw\" (UniqueName: \"kubernetes.io/projected/cf2348b4-7647-4711-83a3-e4dbb907b9b6-kube-api-access-pg7xw\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.224303 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-config-data\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.224479 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.225369 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.232281 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nxpt\" (UniqueName: \"kubernetes.io/projected/8d804805-bf09-488a-80cc-ddda0ba1d466-kube-api-access-9nxpt\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.236285 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.248112 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.269172 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.269230 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n226c\" (UniqueName: \"kubernetes.io/projected/f0a0ac97-b923-44ce-8b90-31965497a560-kube-api-access-n226c\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.269252 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0a0ac97-b923-44ce-8b90-31965497a560-horizon-secret-key\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.269279 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-logs\") pod \"glance-default-internal-api-0\" 
(UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.271548 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.272834 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.272869 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pwk7\" (UniqueName: \"kubernetes.io/projected/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-kube-api-access-7pwk7\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.272931 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.272994 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.273019 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-config-data\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.273116 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-scripts\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.273215 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0a0ac97-b923-44ce-8b90-31965497a560-scripts\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.273255 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a0ac97-b923-44ce-8b90-31965497a560-logs\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc 
kubenswrapper[4865]: I0123 12:11:07.273278 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0a0ac97-b923-44ce-8b90-31965497a560-config-data\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.274071 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-logs\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.274621 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0a0ac97-b923-44ce-8b90-31965497a560-config-data\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.275739 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.277428 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0a0ac97-b923-44ce-8b90-31965497a560-scripts\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.277715 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0a0ac97-b923-44ce-8b90-31965497a560-horizon-secret-key\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.278321 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a0ac97-b923-44ce-8b90-31965497a560-logs\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.279052 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.284528 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.284913 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-config-data\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.285372 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.289960 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xqdv2"] Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.308233 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-scripts\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.344245 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n226c\" (UniqueName: \"kubernetes.io/projected/f0a0ac97-b923-44ce-8b90-31965497a560-kube-api-access-n226c\") pod \"horizon-7999dc6947-8xp26\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.356388 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pwk7\" (UniqueName: \"kubernetes.io/projected/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-kube-api-access-7pwk7\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.377785 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-sxqmn"] Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.403943 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.442851 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:07 crc kubenswrapper[4865]: W0123 12:11:07.494268 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafab83a5_8e47_4531_80de_ae69dfd11bd9.slice/crio-dbd9cde13a27ec909f15afc4f801d1138105392ad52f2d7c09e6335a038dc6b0 WatchSource:0}: Error finding container dbd9cde13a27ec909f15afc4f801d1138105392ad52f2d7c09e6335a038dc6b0: Status 404 returned error can't find the container with id dbd9cde13a27ec909f15afc4f801d1138105392ad52f2d7c09e6335a038dc6b0 Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.505460 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.699497 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.702226 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jw9z7"] Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.732189 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ntg2v" event={"ID":"0b310659-1057-49d4-9016-572d3f4b031e","Type":"ContainerStarted","Data":"96b2d55ead926d291c13e7e950bbbbb89fd4d945bceaa2ae8877643952a6aa29"} Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.736199 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sxqmn" event={"ID":"afab83a5-8e47-4531-80de-ae69dfd11bd9","Type":"ContainerStarted","Data":"dbd9cde13a27ec909f15afc4f801d1138105392ad52f2d7c09e6335a038dc6b0"} Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.741140 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xqdv2" event={"ID":"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90","Type":"ContainerStarted","Data":"d220761f2954b3afa39f9fe3d71a1a8b05dfe0118c7d34c8b347174851fe76c9"} Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.742258 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" event={"ID":"27b740f7-64fd-4ee7-abe7-f87b66f0b12a","Type":"ContainerStarted","Data":"242063dd4a7818cda37a87763a4bf9e758e480dad58f9bc2a67fdfdef373c5bf"} Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.763171 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ntg2v" podStartSLOduration=2.763149275 podStartE2EDuration="2.763149275s" podCreationTimestamp="2026-01-23 12:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:11:07.760945591 +0000 UTC m=+1111.930017817" watchObservedRunningTime="2026-01-23 12:11:07.763149275 +0000 UTC m=+1111.932221501" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.765716 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.839235 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d47ff97-wl986"] Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.871576 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.924192 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-585f76ff69-w25jz"] Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.925667 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.959377 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-585f76ff69-w25jz"] Jan 23 12:11:07 crc kubenswrapper[4865]: I0123 12:11:07.974166 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-scpxv"] Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.006880 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.028491 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99029342-24ed-451c-8f14-9deb0cfc16f0-config-data\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.028545 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/99029342-24ed-451c-8f14-9deb0cfc16f0-horizon-secret-key\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.028677 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99029342-24ed-451c-8f14-9deb0cfc16f0-scripts\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.028700 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhfll\" (UniqueName: \"kubernetes.io/projected/99029342-24ed-451c-8f14-9deb0cfc16f0-kube-api-access-nhfll\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.028718 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99029342-24ed-451c-8f14-9deb0cfc16f0-logs\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.104463 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.129894 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99029342-24ed-451c-8f14-9deb0cfc16f0-logs\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.129951 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99029342-24ed-451c-8f14-9deb0cfc16f0-config-data\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.129975 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/99029342-24ed-451c-8f14-9deb0cfc16f0-horizon-secret-key\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.130078 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99029342-24ed-451c-8f14-9deb0cfc16f0-scripts\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.130105 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhfll\" (UniqueName: \"kubernetes.io/projected/99029342-24ed-451c-8f14-9deb0cfc16f0-kube-api-access-nhfll\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.133311 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99029342-24ed-451c-8f14-9deb0cfc16f0-logs\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.134735 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99029342-24ed-451c-8f14-9deb0cfc16f0-scripts\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.135688 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99029342-24ed-451c-8f14-9deb0cfc16f0-config-data\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.146236 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/99029342-24ed-451c-8f14-9deb0cfc16f0-horizon-secret-key\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.162790 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhfll\" (UniqueName: \"kubernetes.io/projected/99029342-24ed-451c-8f14-9deb0cfc16f0-kube-api-access-nhfll\") pod \"horizon-585f76ff69-w25jz\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.209979 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674c76ff67-kjrj6"] Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.261823 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.418988 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zntkp"] Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.456758 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.462347 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d47ff97-wl986"] Jan 23 12:11:08 crc kubenswrapper[4865]: W0123 12:11:08.492378 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0155ddd7_e729_44e5_b3c9_e18d88d171ef.slice/crio-1b24ae4757e3bfd0f960a1c1beaaafac8ebc911c06831808e506c463d5bd1726 WatchSource:0}: Error finding container 1b24ae4757e3bfd0f960a1c1beaaafac8ebc911c06831808e506c463d5bd1726: Status 404 returned error can't find the container with id 1b24ae4757e3bfd0f960a1c1beaaafac8ebc911c06831808e506c463d5bd1726 Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.658152 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7999dc6947-8xp26"] Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.760738 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-scpxv" event={"ID":"84103afc-63d9-416c-bc51-729cd8c6eeed","Type":"ContainerStarted","Data":"5aa165ea5b3fe8de04952ef8b08ee2e0094ef6c6e77bb3514d04e9d01549bf1f"} Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.763731 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jw9z7" event={"ID":"3e6117d5-9df1-4299-8358-d7235d7847d2","Type":"ContainerStarted","Data":"8dece95a715de16ee0a64a0c418413bb402765396e9f69795de1590181a1d1d1"} Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.771309 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" event={"ID":"c07bee20-b47c-4881-87bc-adba361cd25a","Type":"ContainerStarted","Data":"98e50244e01caccc17e5e49547f5768dddfdc15309e5ef71dca1ae6eff4e5c67"} Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.777041 4865 generic.go:334] "Generic (PLEG): container finished" podID="27b740f7-64fd-4ee7-abe7-f87b66f0b12a" containerID="242063dd4a7818cda37a87763a4bf9e758e480dad58f9bc2a67fdfdef373c5bf" exitCode=0 Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.777099 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" event={"ID":"27b740f7-64fd-4ee7-abe7-f87b66f0b12a","Type":"ContainerDied","Data":"242063dd4a7818cda37a87763a4bf9e758e480dad58f9bc2a67fdfdef373c5bf"} Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.789382 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d47ff97-wl986" event={"ID":"74713871-78be-40f9-97f5-13282a5bfe9e","Type":"ContainerStarted","Data":"b9c9ffe925db7abc9601dc90e1ed303f0ee7b244d5bf0b04c6d43aeb416fc3e8"} Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.821519 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d804805-bf09-488a-80cc-ddda0ba1d466","Type":"ContainerStarted","Data":"b56e15a5525bc76c8254f2169753184c29f20523ea7f88ee05b08dfa585dde37"} Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.822886 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zntkp" 
event={"ID":"0155ddd7-e729-44e5-b3c9-e18d88d171ef","Type":"ContainerStarted","Data":"1b24ae4757e3bfd0f960a1c1beaaafac8ebc911c06831808e506c463d5bd1726"} Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.825640 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7999dc6947-8xp26" event={"ID":"f0a0ac97-b923-44ce-8b90-31965497a560","Type":"ContainerStarted","Data":"1975e027324190fde92500f0ff7bf058df8fdb14f8a86b92571140f14f1d8e9e"} Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.855520 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-zntkp" podStartSLOduration=2.8555020410000003 podStartE2EDuration="2.855502041s" podCreationTimestamp="2026-01-23 12:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:11:08.853051971 +0000 UTC m=+1113.022124197" watchObservedRunningTime="2026-01-23 12:11:08.855502041 +0000 UTC m=+1113.024574267" Jan 23 12:11:08 crc kubenswrapper[4865]: I0123 12:11:08.976330 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:11:08 crc kubenswrapper[4865]: W0123 12:11:08.988355 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf2348b4_7647_4711_83a3_e4dbb907b9b6.slice/crio-f2cee00b8dee561838979a07c60826768a567a771de1909410ad6ff3fcb4942c WatchSource:0}: Error finding container f2cee00b8dee561838979a07c60826768a567a771de1909410ad6ff3fcb4942c: Status 404 returned error can't find the container with id f2cee00b8dee561838979a07c60826768a567a771de1909410ad6ff3fcb4942c Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.111992 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.161438 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-585f76ff69-w25jz"] Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.556418 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.689471 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-dns-svc\") pod \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.689543 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-ovsdbserver-sb\") pod \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.689594 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-ovsdbserver-nb\") pod \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.689655 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-dns-swift-storage-0\") pod \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.689726 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfndv\" (UniqueName: \"kubernetes.io/projected/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-kube-api-access-jfndv\") pod \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.689841 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-config\") pod \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\" (UID: \"27b740f7-64fd-4ee7-abe7-f87b66f0b12a\") " Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.721009 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-kube-api-access-jfndv" (OuterVolumeSpecName: "kube-api-access-jfndv") pod "27b740f7-64fd-4ee7-abe7-f87b66f0b12a" (UID: "27b740f7-64fd-4ee7-abe7-f87b66f0b12a"). InnerVolumeSpecName "kube-api-access-jfndv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.744482 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27b740f7-64fd-4ee7-abe7-f87b66f0b12a" (UID: "27b740f7-64fd-4ee7-abe7-f87b66f0b12a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.748665 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27b740f7-64fd-4ee7-abe7-f87b66f0b12a" (UID: "27b740f7-64fd-4ee7-abe7-f87b66f0b12a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.792416 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfndv\" (UniqueName: \"kubernetes.io/projected/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-kube-api-access-jfndv\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.792458 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.792472 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.816664 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27b740f7-64fd-4ee7-abe7-f87b66f0b12a" (UID: "27b740f7-64fd-4ee7-abe7-f87b66f0b12a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.824176 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-config" (OuterVolumeSpecName: "config") pod "27b740f7-64fd-4ee7-abe7-f87b66f0b12a" (UID: "27b740f7-64fd-4ee7-abe7-f87b66f0b12a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.832048 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "27b740f7-64fd-4ee7-abe7-f87b66f0b12a" (UID: "27b740f7-64fd-4ee7-abe7-f87b66f0b12a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.885271 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zntkp" event={"ID":"0155ddd7-e729-44e5-b3c9-e18d88d171ef","Type":"ContainerStarted","Data":"f1d6e4b57940fae2b9d0686009b1bfb552702fbf15d15ea19a28e777ae03b388"} Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.888861 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" event={"ID":"27b740f7-64fd-4ee7-abe7-f87b66f0b12a","Type":"ContainerDied","Data":"8aa7671c39218e8714ce9ecbe5605ebdb62460b94e925027f1a3c30d2ec8a060"} Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.888910 4865 scope.go:117] "RemoveContainer" containerID="242063dd4a7818cda37a87763a4bf9e758e480dad58f9bc2a67fdfdef373c5bf" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.889215 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68467b6d7-q5r6r" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.892320 4865 generic.go:334] "Generic (PLEG): container finished" podID="c07bee20-b47c-4881-87bc-adba361cd25a" containerID="c6d64ae7cc25b548d477e021f3d8e85c1727efad0260b6f797d74257abb577ed" exitCode=0 Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.892408 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" event={"ID":"c07bee20-b47c-4881-87bc-adba361cd25a","Type":"ContainerDied","Data":"c6d64ae7cc25b548d477e021f3d8e85c1727efad0260b6f797d74257abb577ed"} Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.897271 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.897319 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.897337 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27b740f7-64fd-4ee7-abe7-f87b66f0b12a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:09 crc kubenswrapper[4865]: I0123 12:11:09.977558 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf2348b4-7647-4711-83a3-e4dbb907b9b6","Type":"ContainerStarted","Data":"f2cee00b8dee561838979a07c60826768a567a771de1909410ad6ff3fcb4942c"} Jan 23 12:11:10 crc kubenswrapper[4865]: I0123 12:11:10.020694 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-585f76ff69-w25jz" event={"ID":"99029342-24ed-451c-8f14-9deb0cfc16f0","Type":"ContainerStarted","Data":"ea783795babf6997d935a05a05c87c5b6a5557872d1b2565d481d3a808962c4d"} Jan 23 12:11:10 crc kubenswrapper[4865]: I0123 12:11:10.023966 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68467b6d7-q5r6r"] Jan 23 12:11:10 crc kubenswrapper[4865]: I0123 12:11:10.048333 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68467b6d7-q5r6r"] Jan 23 12:11:10 crc kubenswrapper[4865]: I0123 12:11:10.065527 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93","Type":"ContainerStarted","Data":"ae09c6bc1ea032bc2c35415d6e91693ea3cc2596f033fff784024cea79c2c79d"} Jan 23 12:11:10 crc kubenswrapper[4865]: I0123 12:11:10.166494 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27b740f7-64fd-4ee7-abe7-f87b66f0b12a" path="/var/lib/kubelet/pods/27b740f7-64fd-4ee7-abe7-f87b66f0b12a/volumes" Jan 23 12:11:11 crc kubenswrapper[4865]: I0123 12:11:11.119960 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" event={"ID":"c07bee20-b47c-4881-87bc-adba361cd25a","Type":"ContainerStarted","Data":"b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017"} Jan 23 12:11:11 crc kubenswrapper[4865]: I0123 12:11:11.120322 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:11 crc kubenswrapper[4865]: I0123 12:11:11.122556 4865 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf2348b4-7647-4711-83a3-e4dbb907b9b6","Type":"ContainerStarted","Data":"4c4d9b9292e80bb7a6bed2996c395d58761d92d2b21aa8b14a8c488a93a553db"} Jan 23 12:11:11 crc kubenswrapper[4865]: I0123 12:11:11.149504 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" podStartSLOduration=5.149483223 podStartE2EDuration="5.149483223s" podCreationTimestamp="2026-01-23 12:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:11:11.143776333 +0000 UTC m=+1115.312848559" watchObservedRunningTime="2026-01-23 12:11:11.149483223 +0000 UTC m=+1115.318555449" Jan 23 12:11:12 crc kubenswrapper[4865]: I0123 12:11:12.141437 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93","Type":"ContainerStarted","Data":"d6edc69a4412ec5b5b731338e1698e5dd70b775db1f55bca186e69af6ad9982d"} Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.182369 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf2348b4-7647-4711-83a3-e4dbb907b9b6","Type":"ContainerStarted","Data":"1f9cc1caf9bc99e74490360b09e92618531b67c09fa16454d6d069c548e0a461"} Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.183024 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cf2348b4-7647-4711-83a3-e4dbb907b9b6" containerName="glance-log" containerID="cri-o://4c4d9b9292e80bb7a6bed2996c395d58761d92d2b21aa8b14a8c488a93a553db" gracePeriod=30 Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.183539 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cf2348b4-7647-4711-83a3-e4dbb907b9b6" containerName="glance-httpd" containerID="cri-o://1f9cc1caf9bc99e74490360b09e92618531b67c09fa16454d6d069c548e0a461" gracePeriod=30 Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.212515 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.21249829 podStartE2EDuration="7.21249829s" podCreationTimestamp="2026-01-23 12:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:11:13.211206309 +0000 UTC m=+1117.380278535" watchObservedRunningTime="2026-01-23 12:11:13.21249829 +0000 UTC m=+1117.381570516" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.697239 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7999dc6947-8xp26"] Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.743664 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d44bd7746-lpzlt"] Jan 23 12:11:13 crc kubenswrapper[4865]: E0123 12:11:13.744103 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b740f7-64fd-4ee7-abe7-f87b66f0b12a" containerName="init" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.744116 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b740f7-64fd-4ee7-abe7-f87b66f0b12a" containerName="init" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.744293 4865 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="27b740f7-64fd-4ee7-abe7-f87b66f0b12a" containerName="init" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.745308 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.750634 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.758573 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d44bd7746-lpzlt"] Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.818280 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/581ecfce-2612-48aa-beeb-a41024ef2b6b-config-data\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.818350 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9brh\" (UniqueName: \"kubernetes.io/projected/581ecfce-2612-48aa-beeb-a41024ef2b6b-kube-api-access-d9brh\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.818385 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/581ecfce-2612-48aa-beeb-a41024ef2b6b-logs\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.818404 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-horizon-secret-key\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.818420 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-horizon-tls-certs\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.818458 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/581ecfce-2612-48aa-beeb-a41024ef2b6b-scripts\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.818485 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-combined-ca-bundle\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.869082 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-585f76ff69-w25jz"] Jan 23 12:11:13 crc 
kubenswrapper[4865]: I0123 12:11:13.907541 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66f7b94cdb-f7pw2"] Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.909485 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.919691 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/581ecfce-2612-48aa-beeb-a41024ef2b6b-scripts\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.919742 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-combined-ca-bundle\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.919810 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/581ecfce-2612-48aa-beeb-a41024ef2b6b-config-data\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.919851 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9brh\" (UniqueName: \"kubernetes.io/projected/581ecfce-2612-48aa-beeb-a41024ef2b6b-kube-api-access-d9brh\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.919886 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/581ecfce-2612-48aa-beeb-a41024ef2b6b-logs\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.919913 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-horizon-secret-key\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.919931 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-horizon-tls-certs\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.921116 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/581ecfce-2612-48aa-beeb-a41024ef2b6b-scripts\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.923333 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/581ecfce-2612-48aa-beeb-a41024ef2b6b-logs\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.925158 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66f7b94cdb-f7pw2"] Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.931519 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-combined-ca-bundle\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.931912 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-horizon-secret-key\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.932369 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-horizon-tls-certs\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.936178 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/581ecfce-2612-48aa-beeb-a41024ef2b6b-config-data\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:13 crc kubenswrapper[4865]: I0123 12:11:13.977399 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9brh\" (UniqueName: \"kubernetes.io/projected/581ecfce-2612-48aa-beeb-a41024ef2b6b-kube-api-access-d9brh\") pod \"horizon-7d44bd7746-lpzlt\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.021013 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98cc6a2c-601d-49ae-8d9c-da49869b3639-logs\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.021059 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98cc6a2c-601d-49ae-8d9c-da49869b3639-combined-ca-bundle\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.021113 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/98cc6a2c-601d-49ae-8d9c-da49869b3639-horizon-secret-key\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.021135 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98cc6a2c-601d-49ae-8d9c-da49869b3639-config-data\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.021176 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/98cc6a2c-601d-49ae-8d9c-da49869b3639-horizon-tls-certs\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.021210 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfvgk\" (UniqueName: \"kubernetes.io/projected/98cc6a2c-601d-49ae-8d9c-da49869b3639-kube-api-access-bfvgk\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.021227 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98cc6a2c-601d-49ae-8d9c-da49869b3639-scripts\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.088362 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.124430 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98cc6a2c-601d-49ae-8d9c-da49869b3639-logs\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.124483 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98cc6a2c-601d-49ae-8d9c-da49869b3639-combined-ca-bundle\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.124544 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/98cc6a2c-601d-49ae-8d9c-da49869b3639-horizon-secret-key\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.124572 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98cc6a2c-601d-49ae-8d9c-da49869b3639-config-data\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.124647 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/98cc6a2c-601d-49ae-8d9c-da49869b3639-horizon-tls-certs\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " 
pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.124688 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfvgk\" (UniqueName: \"kubernetes.io/projected/98cc6a2c-601d-49ae-8d9c-da49869b3639-kube-api-access-bfvgk\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.124708 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98cc6a2c-601d-49ae-8d9c-da49869b3639-scripts\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.125397 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/98cc6a2c-601d-49ae-8d9c-da49869b3639-scripts\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.126133 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98cc6a2c-601d-49ae-8d9c-da49869b3639-logs\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.126756 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/98cc6a2c-601d-49ae-8d9c-da49869b3639-config-data\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.129536 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98cc6a2c-601d-49ae-8d9c-da49869b3639-combined-ca-bundle\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.141751 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/98cc6a2c-601d-49ae-8d9c-da49869b3639-horizon-secret-key\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.148045 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/98cc6a2c-601d-49ae-8d9c-da49869b3639-horizon-tls-certs\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.151959 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfvgk\" (UniqueName: \"kubernetes.io/projected/98cc6a2c-601d-49ae-8d9c-da49869b3639-kube-api-access-bfvgk\") pod \"horizon-66f7b94cdb-f7pw2\" (UID: \"98cc6a2c-601d-49ae-8d9c-da49869b3639\") " pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.209122 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="cf2348b4-7647-4711-83a3-e4dbb907b9b6" containerID="1f9cc1caf9bc99e74490360b09e92618531b67c09fa16454d6d069c548e0a461" exitCode=0 Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.209156 4865 generic.go:334] "Generic (PLEG): container finished" podID="cf2348b4-7647-4711-83a3-e4dbb907b9b6" containerID="4c4d9b9292e80bb7a6bed2996c395d58761d92d2b21aa8b14a8c488a93a553db" exitCode=143 Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.209202 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf2348b4-7647-4711-83a3-e4dbb907b9b6","Type":"ContainerDied","Data":"1f9cc1caf9bc99e74490360b09e92618531b67c09fa16454d6d069c548e0a461"} Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.209232 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf2348b4-7647-4711-83a3-e4dbb907b9b6","Type":"ContainerDied","Data":"4c4d9b9292e80bb7a6bed2996c395d58761d92d2b21aa8b14a8c488a93a553db"} Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.215443 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93","Type":"ContainerStarted","Data":"755c2f513509ef9ea9994cb41e7e2e6e8949e46b66c532c779088f703867549e"} Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.215616 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" containerName="glance-log" containerID="cri-o://d6edc69a4412ec5b5b731338e1698e5dd70b775db1f55bca186e69af6ad9982d" gracePeriod=30 Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.216089 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" containerName="glance-httpd" containerID="cri-o://755c2f513509ef9ea9994cb41e7e2e6e8949e46b66c532c779088f703867549e" gracePeriod=30 Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.247432 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.247409287 podStartE2EDuration="8.247409287s" podCreationTimestamp="2026-01-23 12:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:11:14.242050005 +0000 UTC m=+1118.411122231" watchObservedRunningTime="2026-01-23 12:11:14.247409287 +0000 UTC m=+1118.416481503" Jan 23 12:11:14 crc kubenswrapper[4865]: I0123 12:11:14.354118 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:15 crc kubenswrapper[4865]: I0123 12:11:15.231529 4865 generic.go:334] "Generic (PLEG): container finished" podID="143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" containerID="755c2f513509ef9ea9994cb41e7e2e6e8949e46b66c532c779088f703867549e" exitCode=0 Jan 23 12:11:15 crc kubenswrapper[4865]: I0123 12:11:15.231572 4865 generic.go:334] "Generic (PLEG): container finished" podID="143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" containerID="d6edc69a4412ec5b5b731338e1698e5dd70b775db1f55bca186e69af6ad9982d" exitCode=143 Jan 23 12:11:15 crc kubenswrapper[4865]: I0123 12:11:15.231622 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93","Type":"ContainerDied","Data":"755c2f513509ef9ea9994cb41e7e2e6e8949e46b66c532c779088f703867549e"} Jan 23 12:11:15 crc kubenswrapper[4865]: I0123 12:11:15.231670 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93","Type":"ContainerDied","Data":"d6edc69a4412ec5b5b731338e1698e5dd70b775db1f55bca186e69af6ad9982d"} Jan 23 12:11:16 crc kubenswrapper[4865]: I0123 12:11:16.248113 4865 generic.go:334] "Generic (PLEG): container finished" podID="0b310659-1057-49d4-9016-572d3f4b031e" containerID="96b2d55ead926d291c13e7e950bbbbb89fd4d945bceaa2ae8877643952a6aa29" exitCode=0 Jan 23 12:11:16 crc kubenswrapper[4865]: I0123 12:11:16.248190 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ntg2v" event={"ID":"0b310659-1057-49d4-9016-572d3f4b031e","Type":"ContainerDied","Data":"96b2d55ead926d291c13e7e950bbbbb89fd4d945bceaa2ae8877643952a6aa29"} Jan 23 12:11:17 crc kubenswrapper[4865]: I0123 12:11:17.038789 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:11:17 crc kubenswrapper[4865]: I0123 12:11:17.183119 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c996dc455-pwf2q"] Jan 23 12:11:17 crc kubenswrapper[4865]: I0123 12:11:17.183666 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" podUID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerName="dnsmasq-dns" containerID="cri-o://910dfc6635f1736615be07a4a6897013d42c0c2b0437524c3a4d62a4b4884e17" gracePeriod=10 Jan 23 12:11:18 crc kubenswrapper[4865]: I0123 12:11:18.408062 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" podUID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 23 12:11:19 crc kubenswrapper[4865]: I0123 12:11:19.287888 4865 generic.go:334] "Generic (PLEG): container finished" podID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerID="910dfc6635f1736615be07a4a6897013d42c0c2b0437524c3a4d62a4b4884e17" exitCode=0 Jan 23 12:11:19 crc kubenswrapper[4865]: I0123 12:11:19.287926 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" event={"ID":"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5","Type":"ContainerDied","Data":"910dfc6635f1736615be07a4a6897013d42c0c2b0437524c3a4d62a4b4884e17"} Jan 23 12:11:23 crc kubenswrapper[4865]: I0123 12:11:23.407300 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" 
podUID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 23 12:11:27 crc kubenswrapper[4865]: E0123 12:11:27.060243 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-horizon:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:27 crc kubenswrapper[4865]: E0123 12:11:27.060818 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-horizon:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:27 crc kubenswrapper[4865]: E0123 12:11:27.060947 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-horizon:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5ch57ch554h5f7h5f9h677hf5h557h6fh78h58h666hc5hcdh5f7hch68dh587h5c7h546h584h55ch5d8h5dbh5f8h9chb9h577h556h566h696h57fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nhfll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-585f76ff69-w25jz_openstack(99029342-24ed-451c-8f14-9deb0cfc16f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:11:27 crc kubenswrapper[4865]: E0123 12:11:27.065384 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-horizon:c3923531bcda0b0811b2d5053f189beb\\\"\"]" pod="openstack/horizon-585f76ff69-w25jz" podUID="99029342-24ed-451c-8f14-9deb0cfc16f0" Jan 23 
12:11:27 crc kubenswrapper[4865]: E0123 12:11:27.100699 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-horizon:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:27 crc kubenswrapper[4865]: E0123 12:11:27.100756 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-horizon:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:27 crc kubenswrapper[4865]: E0123 12:11:27.100861 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-horizon:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66chb9h547h5c9h5h67bh685h56bh688h5bch5fch674h566hb6h679h684hc4h594h5hb4hbchd7h5cch654h677h55fhbfhfch64ch647h67fhdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9f5cl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6d47ff97-wl986_openstack(74713871-78be-40f9-97f5-13282a5bfe9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:11:27 crc kubenswrapper[4865]: E0123 12:11:27.103384 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-horizon:c3923531bcda0b0811b2d5053f189beb\\\"\"]" pod="openstack/horizon-6d47ff97-wl986" podUID="74713871-78be-40f9-97f5-13282a5bfe9e" Jan 23 12:11:27 crc kubenswrapper[4865]: E0123 12:11:27.691399 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context 
canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-ceilometer-central:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:27 crc kubenswrapper[4865]: E0123 12:11:27.691454 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-ceilometer-central:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:27 crc kubenswrapper[4865]: E0123 12:11:27.691560 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-ceilometer-central:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndbh58ch64h68fh7bh56fh55hffh5fdh67bhcchc8hf5h56bh554hdh5dch668hb9h5b5hf5hc9h8hf4h67h5b4h597h54h647h77hd5h68q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nxpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(8d804805-bf09-488a-80cc-ddda0ba1d466): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:11:28 crc kubenswrapper[4865]: I0123 12:11:28.407433 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" podUID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Jan 23 12:11:28 crc kubenswrapper[4865]: I0123 12:11:28.407854 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:11:29 crc kubenswrapper[4865]: E0123 12:11:29.130720 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-placement-api:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:29 crc kubenswrapper[4865]: E0123 12:11:29.130802 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-placement-api:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:29 crc kubenswrapper[4865]: E0123 12:11:29.131373 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-placement-api:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27qvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-scpxv_openstack(84103afc-63d9-416c-bc51-729cd8c6eeed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:11:29 crc kubenswrapper[4865]: E0123 12:11:29.132546 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-scpxv" podUID="84103afc-63d9-416c-bc51-729cd8c6eeed" Jan 23 12:11:29 crc kubenswrapper[4865]: E0123 12:11:29.400431 4865 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-placement-api:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/placement-db-sync-scpxv" podUID="84103afc-63d9-416c-bc51-729cd8c6eeed" Jan 23 12:11:30 crc kubenswrapper[4865]: I0123 12:11:30.870487 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.015372 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-config-data\") pod \"0b310659-1057-49d4-9016-572d3f4b031e\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.015732 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-scripts\") pod \"0b310659-1057-49d4-9016-572d3f4b031e\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.015771 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-fernet-keys\") pod \"0b310659-1057-49d4-9016-572d3f4b031e\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.016567 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb8v8\" (UniqueName: \"kubernetes.io/projected/0b310659-1057-49d4-9016-572d3f4b031e-kube-api-access-sb8v8\") pod \"0b310659-1057-49d4-9016-572d3f4b031e\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.016628 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-credential-keys\") pod \"0b310659-1057-49d4-9016-572d3f4b031e\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.016645 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-combined-ca-bundle\") pod \"0b310659-1057-49d4-9016-572d3f4b031e\" (UID: \"0b310659-1057-49d4-9016-572d3f4b031e\") " Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.021756 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-scripts" (OuterVolumeSpecName: "scripts") pod "0b310659-1057-49d4-9016-572d3f4b031e" (UID: "0b310659-1057-49d4-9016-572d3f4b031e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.024310 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b310659-1057-49d4-9016-572d3f4b031e-kube-api-access-sb8v8" (OuterVolumeSpecName: "kube-api-access-sb8v8") pod "0b310659-1057-49d4-9016-572d3f4b031e" (UID: "0b310659-1057-49d4-9016-572d3f4b031e"). InnerVolumeSpecName "kube-api-access-sb8v8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.030893 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0b310659-1057-49d4-9016-572d3f4b031e" (UID: "0b310659-1057-49d4-9016-572d3f4b031e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.031134 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0b310659-1057-49d4-9016-572d3f4b031e" (UID: "0b310659-1057-49d4-9016-572d3f4b031e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.053992 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-config-data" (OuterVolumeSpecName: "config-data") pod "0b310659-1057-49d4-9016-572d3f4b031e" (UID: "0b310659-1057-49d4-9016-572d3f4b031e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.054997 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b310659-1057-49d4-9016-572d3f4b031e" (UID: "0b310659-1057-49d4-9016-572d3f4b031e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.119879 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.119913 4865 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.119924 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb8v8\" (UniqueName: \"kubernetes.io/projected/0b310659-1057-49d4-9016-572d3f4b031e-kube-api-access-sb8v8\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.119934 4865 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.119944 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.119953 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b310659-1057-49d4-9016-572d3f4b031e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.415657 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ntg2v" 
event={"ID":"0b310659-1057-49d4-9016-572d3f4b031e","Type":"ContainerDied","Data":"b43f81dc035aefb390c090e487eff821dab8cf633c2c0d8f2d876377afba4e56"} Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.415696 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b43f81dc035aefb390c090e487eff821dab8cf633c2c0d8f2d876377afba4e56" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.415756 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ntg2v" Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.954710 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ntg2v"] Jan 23 12:11:31 crc kubenswrapper[4865]: I0123 12:11:31.963713 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ntg2v"] Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.058386 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dlxs4"] Jan 23 12:11:32 crc kubenswrapper[4865]: E0123 12:11:32.058987 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b310659-1057-49d4-9016-572d3f4b031e" containerName="keystone-bootstrap" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.059283 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b310659-1057-49d4-9016-572d3f4b031e" containerName="keystone-bootstrap" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.059587 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b310659-1057-49d4-9016-572d3f4b031e" containerName="keystone-bootstrap" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.061981 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.071200 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.072007 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.072117 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.072153 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.073221 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nlns" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.079003 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dlxs4"] Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.145931 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-combined-ca-bundle\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.145989 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-fernet-keys\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " 
pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.146035 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-config-data\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.146090 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-credential-keys\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.146119 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-scripts\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.146157 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvk28\" (UniqueName: \"kubernetes.io/projected/68c0d50f-01a6-4e5c-92e8-626af12ba85a-kube-api-access-rvk28\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.148972 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b310659-1057-49d4-9016-572d3f4b031e" path="/var/lib/kubelet/pods/0b310659-1057-49d4-9016-572d3f4b031e/volumes" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.259317 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-combined-ca-bundle\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.259396 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-fernet-keys\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.259443 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-config-data\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.259500 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-credential-keys\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.259526 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-scripts\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.259567 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvk28\" (UniqueName: \"kubernetes.io/projected/68c0d50f-01a6-4e5c-92e8-626af12ba85a-kube-api-access-rvk28\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.264464 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-fernet-keys\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.265676 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-credential-keys\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.270376 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-config-data\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.271716 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-combined-ca-bundle\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.277999 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-scripts\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.288275 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvk28\" (UniqueName: \"kubernetes.io/projected/68c0d50f-01a6-4e5c-92e8-626af12ba85a-kube-api-access-rvk28\") pod \"keystone-bootstrap-dlxs4\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:32 crc kubenswrapper[4865]: I0123 12:11:32.450593 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:37 crc kubenswrapper[4865]: I0123 12:11:37.700285 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 12:11:37 crc kubenswrapper[4865]: I0123 12:11:37.700751 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 12:11:37 crc kubenswrapper[4865]: I0123 12:11:37.766673 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:37 crc kubenswrapper[4865]: I0123 12:11:37.766735 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:38 crc kubenswrapper[4865]: I0123 12:11:38.407228 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" podUID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.663187 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.793041 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-scripts\") pod \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.793103 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg7xw\" (UniqueName: \"kubernetes.io/projected/cf2348b4-7647-4711-83a3-e4dbb907b9b6-kube-api-access-pg7xw\") pod \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.793128 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-combined-ca-bundle\") pod \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.793165 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-public-tls-certs\") pod \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.793203 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2348b4-7647-4711-83a3-e4dbb907b9b6-httpd-run\") pod \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.793248 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.793390 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cf2348b4-7647-4711-83a3-e4dbb907b9b6-logs\") pod \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.793432 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-config-data\") pod \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\" (UID: \"cf2348b4-7647-4711-83a3-e4dbb907b9b6\") " Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.794570 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf2348b4-7647-4711-83a3-e4dbb907b9b6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cf2348b4-7647-4711-83a3-e4dbb907b9b6" (UID: "cf2348b4-7647-4711-83a3-e4dbb907b9b6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.794751 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf2348b4-7647-4711-83a3-e4dbb907b9b6-logs" (OuterVolumeSpecName: "logs") pod "cf2348b4-7647-4711-83a3-e4dbb907b9b6" (UID: "cf2348b4-7647-4711-83a3-e4dbb907b9b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.802897 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-scripts" (OuterVolumeSpecName: "scripts") pod "cf2348b4-7647-4711-83a3-e4dbb907b9b6" (UID: "cf2348b4-7647-4711-83a3-e4dbb907b9b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.804784 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "cf2348b4-7647-4711-83a3-e4dbb907b9b6" (UID: "cf2348b4-7647-4711-83a3-e4dbb907b9b6"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.805031 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf2348b4-7647-4711-83a3-e4dbb907b9b6-kube-api-access-pg7xw" (OuterVolumeSpecName: "kube-api-access-pg7xw") pod "cf2348b4-7647-4711-83a3-e4dbb907b9b6" (UID: "cf2348b4-7647-4711-83a3-e4dbb907b9b6"). InnerVolumeSpecName "kube-api-access-pg7xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.827238 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf2348b4-7647-4711-83a3-e4dbb907b9b6" (UID: "cf2348b4-7647-4711-83a3-e4dbb907b9b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.855777 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-config-data" (OuterVolumeSpecName: "config-data") pod "cf2348b4-7647-4711-83a3-e4dbb907b9b6" (UID: "cf2348b4-7647-4711-83a3-e4dbb907b9b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.858471 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cf2348b4-7647-4711-83a3-e4dbb907b9b6" (UID: "cf2348b4-7647-4711-83a3-e4dbb907b9b6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.895125 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.895152 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg7xw\" (UniqueName: \"kubernetes.io/projected/cf2348b4-7647-4711-83a3-e4dbb907b9b6-kube-api-access-pg7xw\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.895167 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.895176 4865 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.895184 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2348b4-7647-4711-83a3-e4dbb907b9b6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.895218 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.895233 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2348b4-7647-4711-83a3-e4dbb907b9b6-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.895242 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2348b4-7647-4711-83a3-e4dbb907b9b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.913146 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 23 12:11:39 crc kubenswrapper[4865]: I0123 12:11:39.996452 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.494312 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf2348b4-7647-4711-83a3-e4dbb907b9b6","Type":"ContainerDied","Data":"f2cee00b8dee561838979a07c60826768a567a771de1909410ad6ff3fcb4942c"} Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.494647 4865 scope.go:117] "RemoveContainer" containerID="1f9cc1caf9bc99e74490360b09e92618531b67c09fa16454d6d069c548e0a461" 
Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.494375 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.520694 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.529157 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.543504 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:11:40 crc kubenswrapper[4865]: E0123 12:11:40.543929 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2348b4-7647-4711-83a3-e4dbb907b9b6" containerName="glance-httpd" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.543947 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2348b4-7647-4711-83a3-e4dbb907b9b6" containerName="glance-httpd" Jan 23 12:11:40 crc kubenswrapper[4865]: E0123 12:11:40.543981 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2348b4-7647-4711-83a3-e4dbb907b9b6" containerName="glance-log" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.543989 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2348b4-7647-4711-83a3-e4dbb907b9b6" containerName="glance-log" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.544194 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf2348b4-7647-4711-83a3-e4dbb907b9b6" containerName="glance-log" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.544238 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf2348b4-7647-4711-83a3-e4dbb907b9b6" containerName="glance-httpd" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.545208 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.548715 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.549217 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.565725 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.708206 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.708299 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.708353 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.708383 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.708457 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87csn\" (UniqueName: \"kubernetes.io/projected/dc78e553-ea01-4581-b947-c4cff5f2ba13-kube-api-access-87csn\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.708533 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc78e553-ea01-4581-b947-c4cff5f2ba13-logs\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.708573 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc78e553-ea01-4581-b947-c4cff5f2ba13-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.708638 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.810190 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.810264 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.810294 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.810364 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87csn\" (UniqueName: \"kubernetes.io/projected/dc78e553-ea01-4581-b947-c4cff5f2ba13-kube-api-access-87csn\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.810405 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc78e553-ea01-4581-b947-c4cff5f2ba13-logs\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.810444 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc78e553-ea01-4581-b947-c4cff5f2ba13-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.810473 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.810507 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.810918 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.811310 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc78e553-ea01-4581-b947-c4cff5f2ba13-logs\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.812737 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc78e553-ea01-4581-b947-c4cff5f2ba13-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.819703 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.822303 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.822736 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.833022 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.843926 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87csn\" (UniqueName: \"kubernetes.io/projected/dc78e553-ea01-4581-b947-c4cff5f2ba13-kube-api-access-87csn\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.846165 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.862866 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: E0123 12:11:40.891797 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-heat-engine:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:40 crc kubenswrapper[4865]: E0123 12:11:40.891856 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-heat-engine:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:40 crc kubenswrapper[4865]: E0123 12:11:40.891985 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-heat-engine:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ltjwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-jw9z7_openstack(3e6117d5-9df1-4299-8358-d7235d7847d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:11:40 crc kubenswrapper[4865]: E0123 12:11:40.893294 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-jw9z7" podUID="3e6117d5-9df1-4299-8358-d7235d7847d2" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.931536 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.938936 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:40 crc kubenswrapper[4865]: I0123 12:11:40.944030 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012393 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-config-data\") pod \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012439 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pwk7\" (UniqueName: \"kubernetes.io/projected/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-kube-api-access-7pwk7\") pod \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012468 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99029342-24ed-451c-8f14-9deb0cfc16f0-scripts\") pod \"99029342-24ed-451c-8f14-9deb0cfc16f0\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012492 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-httpd-run\") pod \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012520 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-combined-ca-bundle\") pod \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012557 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f5cl\" (UniqueName: \"kubernetes.io/projected/74713871-78be-40f9-97f5-13282a5bfe9e-kube-api-access-9f5cl\") pod \"74713871-78be-40f9-97f5-13282a5bfe9e\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012586 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-scripts\") pod \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012641 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74713871-78be-40f9-97f5-13282a5bfe9e-logs\") pod \"74713871-78be-40f9-97f5-13282a5bfe9e\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012665 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74713871-78be-40f9-97f5-13282a5bfe9e-horizon-secret-key\") pod \"74713871-78be-40f9-97f5-13282a5bfe9e\" (UID: 
\"74713871-78be-40f9-97f5-13282a5bfe9e\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012699 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-logs\") pod \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012753 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/99029342-24ed-451c-8f14-9deb0cfc16f0-horizon-secret-key\") pod \"99029342-24ed-451c-8f14-9deb0cfc16f0\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012776 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99029342-24ed-451c-8f14-9deb0cfc16f0-logs\") pod \"99029342-24ed-451c-8f14-9deb0cfc16f0\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012790 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74713871-78be-40f9-97f5-13282a5bfe9e-config-data\") pod \"74713871-78be-40f9-97f5-13282a5bfe9e\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012810 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-internal-tls-certs\") pod \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012836 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\" (UID: \"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.012854 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/99029342-24ed-451c-8f14-9deb0cfc16f0-config-data\") pod \"99029342-24ed-451c-8f14-9deb0cfc16f0\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.013655 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74713871-78be-40f9-97f5-13282a5bfe9e-logs" (OuterVolumeSpecName: "logs") pod "74713871-78be-40f9-97f5-13282a5bfe9e" (UID: "74713871-78be-40f9-97f5-13282a5bfe9e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.013845 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74713871-78be-40f9-97f5-13282a5bfe9e-scripts\") pod \"74713871-78be-40f9-97f5-13282a5bfe9e\" (UID: \"74713871-78be-40f9-97f5-13282a5bfe9e\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.013906 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhfll\" (UniqueName: \"kubernetes.io/projected/99029342-24ed-451c-8f14-9deb0cfc16f0-kube-api-access-nhfll\") pod \"99029342-24ed-451c-8f14-9deb0cfc16f0\" (UID: \"99029342-24ed-451c-8f14-9deb0cfc16f0\") " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.014345 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99029342-24ed-451c-8f14-9deb0cfc16f0-logs" (OuterVolumeSpecName: "logs") pod "99029342-24ed-451c-8f14-9deb0cfc16f0" (UID: "99029342-24ed-451c-8f14-9deb0cfc16f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.014729 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" (UID: "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.014799 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/99029342-24ed-451c-8f14-9deb0cfc16f0-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.014817 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74713871-78be-40f9-97f5-13282a5bfe9e-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.015142 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99029342-24ed-451c-8f14-9deb0cfc16f0-scripts" (OuterVolumeSpecName: "scripts") pod "99029342-24ed-451c-8f14-9deb0cfc16f0" (UID: "99029342-24ed-451c-8f14-9deb0cfc16f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.015464 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-logs" (OuterVolumeSpecName: "logs") pod "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" (UID: "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.015748 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99029342-24ed-451c-8f14-9deb0cfc16f0-config-data" (OuterVolumeSpecName: "config-data") pod "99029342-24ed-451c-8f14-9deb0cfc16f0" (UID: "99029342-24ed-451c-8f14-9deb0cfc16f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.016341 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74713871-78be-40f9-97f5-13282a5bfe9e-scripts" (OuterVolumeSpecName: "scripts") pod "74713871-78be-40f9-97f5-13282a5bfe9e" (UID: "74713871-78be-40f9-97f5-13282a5bfe9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.019778 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" (UID: "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.020339 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74713871-78be-40f9-97f5-13282a5bfe9e-config-data" (OuterVolumeSpecName: "config-data") pod "74713871-78be-40f9-97f5-13282a5bfe9e" (UID: "74713871-78be-40f9-97f5-13282a5bfe9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.025519 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74713871-78be-40f9-97f5-13282a5bfe9e-kube-api-access-9f5cl" (OuterVolumeSpecName: "kube-api-access-9f5cl") pod "74713871-78be-40f9-97f5-13282a5bfe9e" (UID: "74713871-78be-40f9-97f5-13282a5bfe9e"). InnerVolumeSpecName "kube-api-access-9f5cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.026571 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99029342-24ed-451c-8f14-9deb0cfc16f0-kube-api-access-nhfll" (OuterVolumeSpecName: "kube-api-access-nhfll") pod "99029342-24ed-451c-8f14-9deb0cfc16f0" (UID: "99029342-24ed-451c-8f14-9deb0cfc16f0"). InnerVolumeSpecName "kube-api-access-nhfll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.027784 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74713871-78be-40f9-97f5-13282a5bfe9e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "74713871-78be-40f9-97f5-13282a5bfe9e" (UID: "74713871-78be-40f9-97f5-13282a5bfe9e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.029063 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-scripts" (OuterVolumeSpecName: "scripts") pod "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" (UID: "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.032130 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-kube-api-access-7pwk7" (OuterVolumeSpecName: "kube-api-access-7pwk7") pod "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" (UID: "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93"). InnerVolumeSpecName "kube-api-access-7pwk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.038720 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99029342-24ed-451c-8f14-9deb0cfc16f0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "99029342-24ed-451c-8f14-9deb0cfc16f0" (UID: "99029342-24ed-451c-8f14-9deb0cfc16f0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.054480 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" (UID: "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.077518 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-config-data" (OuterVolumeSpecName: "config-data") pod "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" (UID: "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.100415 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" (UID: "143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116072 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/74713871-78be-40f9-97f5-13282a5bfe9e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116130 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116148 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/99029342-24ed-451c-8f14-9deb0cfc16f0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116157 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74713871-78be-40f9-97f5-13282a5bfe9e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116213 4865 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116245 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116257 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/99029342-24ed-451c-8f14-9deb0cfc16f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116267 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/74713871-78be-40f9-97f5-13282a5bfe9e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116276 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhfll\" (UniqueName: \"kubernetes.io/projected/99029342-24ed-451c-8f14-9deb0cfc16f0-kube-api-access-nhfll\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116285 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116296 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pwk7\" (UniqueName: \"kubernetes.io/projected/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-kube-api-access-7pwk7\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116303 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99029342-24ed-451c-8f14-9deb0cfc16f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116312 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116320 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116328 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f5cl\" (UniqueName: \"kubernetes.io/projected/74713871-78be-40f9-97f5-13282a5bfe9e-kube-api-access-9f5cl\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.116336 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.140659 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.219122 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.503479 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d47ff97-wl986" event={"ID":"74713871-78be-40f9-97f5-13282a5bfe9e","Type":"ContainerDied","Data":"b9c9ffe925db7abc9601dc90e1ed303f0ee7b244d5bf0b04c6d43aeb416fc3e8"} Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.503588 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d47ff97-wl986" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.504764 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-585f76ff69-w25jz" event={"ID":"99029342-24ed-451c-8f14-9deb0cfc16f0","Type":"ContainerDied","Data":"ea783795babf6997d935a05a05c87c5b6a5557872d1b2565d481d3a808962c4d"} Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.504815 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-585f76ff69-w25jz" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.506697 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93","Type":"ContainerDied","Data":"ae09c6bc1ea032bc2c35415d6e91693ea3cc2596f033fff784024cea79c2c79d"} Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.506713 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: E0123 12:11:41.508339 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-heat-engine:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/heat-db-sync-jw9z7" podUID="3e6117d5-9df1-4299-8358-d7235d7847d2" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.596647 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d47ff97-wl986"] Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.606270 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d47ff97-wl986"] Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.638031 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-585f76ff69-w25jz"] Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.647571 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-585f76ff69-w25jz"] Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.654933 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.662612 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.672156 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:11:41 crc kubenswrapper[4865]: E0123 12:11:41.672573 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" containerName="glance-log" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.672589 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" containerName="glance-log" Jan 23 12:11:41 crc kubenswrapper[4865]: E0123 12:11:41.672624 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" containerName="glance-httpd" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.672632 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" containerName="glance-httpd" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.672830 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" 
containerName="glance-httpd" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.672862 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" containerName="glance-log" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.673812 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.675960 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.676344 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.680866 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.829652 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgjl8\" (UniqueName: \"kubernetes.io/projected/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-kube-api-access-wgjl8\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.829718 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.829742 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.829761 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.829783 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-logs\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.829807 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.829824 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.829873 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.934251 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgjl8\" (UniqueName: \"kubernetes.io/projected/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-kube-api-access-wgjl8\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.934316 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.934338 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.934355 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.934380 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-logs\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.934439 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.934465 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.934520 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.935625 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-logs\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.935758 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.935959 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.942338 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.943580 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.952146 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.951408 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.960531 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgjl8\" (UniqueName: \"kubernetes.io/projected/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-kube-api-access-wgjl8\") pod \"glance-default-internal-api-0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.974285 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:11:41 crc kubenswrapper[4865]: I0123 12:11:41.990768 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:42 crc kubenswrapper[4865]: I0123 12:11:42.128035 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93" path="/var/lib/kubelet/pods/143f1d3b-55e7-4fa6-a8e0-4d0bb9a70b93/volumes" Jan 23 12:11:42 crc kubenswrapper[4865]: I0123 12:11:42.128702 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74713871-78be-40f9-97f5-13282a5bfe9e" path="/var/lib/kubelet/pods/74713871-78be-40f9-97f5-13282a5bfe9e/volumes" Jan 23 12:11:42 crc kubenswrapper[4865]: I0123 12:11:42.129110 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99029342-24ed-451c-8f14-9deb0cfc16f0" path="/var/lib/kubelet/pods/99029342-24ed-451c-8f14-9deb0cfc16f0/volumes" Jan 23 12:11:42 crc kubenswrapper[4865]: I0123 12:11:42.129989 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2348b4-7647-4711-83a3-e4dbb907b9b6" path="/var/lib/kubelet/pods/cf2348b4-7647-4711-83a3-e4dbb907b9b6/volumes" Jan 23 12:11:43 crc kubenswrapper[4865]: E0123 12:11:43.087819 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-barbican-api:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:43 crc kubenswrapper[4865]: E0123 12:11:43.088193 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-barbican-api:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:43 crc kubenswrapper[4865]: E0123 12:11:43.088352 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-barbican-api:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-277q4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-xqdv2_openstack(1dbb30bd-db3b-48a2-96dd-6193b6a7ab90): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:11:43 crc kubenswrapper[4865]: E0123 12:11:43.089985 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-xqdv2" podUID="1dbb30bd-db3b-48a2-96dd-6193b6a7ab90" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.183215 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.267398 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-config\") pod \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.268755 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2wb9\" (UniqueName: \"kubernetes.io/projected/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-kube-api-access-f2wb9\") pod \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.268804 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-dns-swift-storage-0\") pod \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.268835 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-ovsdbserver-nb\") pod \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.268986 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-ovsdbserver-sb\") pod \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.269034 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-dns-svc\") pod \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\" (UID: \"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5\") " Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.274555 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-kube-api-access-f2wb9" (OuterVolumeSpecName: "kube-api-access-f2wb9") pod "6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" (UID: "6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5"). InnerVolumeSpecName "kube-api-access-f2wb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.330302 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" (UID: "6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.336826 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" (UID: "6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.345939 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" (UID: "6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.371357 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.371388 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2wb9\" (UniqueName: \"kubernetes.io/projected/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-kube-api-access-f2wb9\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.371399 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.371408 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.384401 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" (UID: "6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.406982 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-config" (OuterVolumeSpecName: "config") pod "6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" (UID: "6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.408025 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" podUID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.475145 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.475186 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.530438 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" event={"ID":"6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5","Type":"ContainerDied","Data":"3ed433f5a667064b682d158867e8d2203268ab14e685f0398f86b589d01ff3de"} Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.530453 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c996dc455-pwf2q" Jan 23 12:11:43 crc kubenswrapper[4865]: E0123 12:11:43.533625 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-barbican-api:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/barbican-db-sync-xqdv2" podUID="1dbb30bd-db3b-48a2-96dd-6193b6a7ab90" Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.569234 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66f7b94cdb-f7pw2"] Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.590109 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c996dc455-pwf2q"] Jan 23 12:11:43 crc kubenswrapper[4865]: I0123 12:11:43.599553 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c996dc455-pwf2q"] Jan 23 12:11:43 crc kubenswrapper[4865]: E0123 12:11:43.698282 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fc82a0b_b3fb_42ea_b23f_a5459a2d00f5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fc82a0b_b3fb_42ea_b23f_a5459a2d00f5.slice/crio-3ed433f5a667064b682d158867e8d2203268ab14e685f0398f86b589d01ff3de\": RecentStats: unable to find data in memory cache]" Jan 23 12:11:44 crc kubenswrapper[4865]: I0123 12:11:44.130459 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" path="/var/lib/kubelet/pods/6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5/volumes" Jan 23 12:11:44 crc kubenswrapper[4865]: E0123 12:11:44.394349 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-cinder-api:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:44 crc kubenswrapper[4865]: E0123 12:11:44.394410 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-cinder-api:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:11:44 crc kubenswrapper[4865]: E0123 12:11:44.394651 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-cinder-api:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnrpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-sxqmn_openstack(afab83a5-8e47-4531-80de-ae69dfd11bd9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:11:44 crc kubenswrapper[4865]: E0123 12:11:44.396125 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-sxqmn" podUID="afab83a5-8e47-4531-80de-ae69dfd11bd9" Jan 23 12:11:44 crc kubenswrapper[4865]: E0123 12:11:44.540154 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.132:5001/podified-antelope-centos9/openstack-cinder-api:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/cinder-db-sync-sxqmn" podUID="afab83a5-8e47-4531-80de-ae69dfd11bd9" Jan 23 12:11:44 crc kubenswrapper[4865]: I0123 12:11:44.726815 4865 scope.go:117] "RemoveContainer" containerID="4c4d9b9292e80bb7a6bed2996c395d58761d92d2b21aa8b14a8c488a93a553db" Jan 23 12:11:44 crc kubenswrapper[4865]: I0123 12:11:44.847356 4865 scope.go:117] "RemoveContainer" containerID="755c2f513509ef9ea9994cb41e7e2e6e8949e46b66c532c779088f703867549e" Jan 23 12:11:44 crc kubenswrapper[4865]: I0123 12:11:44.909205 4865 scope.go:117] "RemoveContainer" containerID="d6edc69a4412ec5b5b731338e1698e5dd70b775db1f55bca186e69af6ad9982d" Jan 23 12:11:44 crc kubenswrapper[4865]: I0123 12:11:44.999168 4865 scope.go:117] "RemoveContainer" containerID="910dfc6635f1736615be07a4a6897013d42c0c2b0437524c3a4d62a4b4884e17" Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.029313 4865 scope.go:117] "RemoveContainer" containerID="afbb4a64e4bb4f0d845051e9923efdaec8d582d3ba6463f4b1125771b319df76" Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.054864 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d44bd7746-lpzlt"] Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.351499 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.402134 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dlxs4"] Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.454827 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.458229 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 23 12:11:45 crc kubenswrapper[4865]: W0123 12:11:45.466440 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc78e553_ea01_4581_b947_c4cff5f2ba13.slice/crio-ee2d132745737c55d7577766afd93dffd44962b36efa90f13e080713e154b1b6 WatchSource:0}: Error finding container ee2d132745737c55d7577766afd93dffd44962b36efa90f13e080713e154b1b6: Status 404 returned error can't find the container with id ee2d132745737c55d7577766afd93dffd44962b36efa90f13e080713e154b1b6 Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.566585 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eec67cc0-b9ae-4767-82b9-ffa764ab07d0","Type":"ContainerStarted","Data":"9f161d4ec669b2c0808b7da88a17938158076df80bb306c9c7a36ad020e8da6a"} Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.570891 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7999dc6947-8xp26" event={"ID":"f0a0ac97-b923-44ce-8b90-31965497a560","Type":"ContainerStarted","Data":"32cfc548e09beda78485f718018e8e94ae7cb9906ccb77f8f7d9c77928c1d3ed"} Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.570992 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7999dc6947-8xp26" event={"ID":"f0a0ac97-b923-44ce-8b90-31965497a560","Type":"ContainerStarted","Data":"e5aadf1e3aa11acb349489c22739155115814b4e9ef4e90e3701e7d6018b2a69"} Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.571136 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7999dc6947-8xp26" 
podUID="f0a0ac97-b923-44ce-8b90-31965497a560" containerName="horizon-log" containerID="cri-o://e5aadf1e3aa11acb349489c22739155115814b4e9ef4e90e3701e7d6018b2a69" gracePeriod=30 Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.572481 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7999dc6947-8xp26" podUID="f0a0ac97-b923-44ce-8b90-31965497a560" containerName="horizon" containerID="cri-o://32cfc548e09beda78485f718018e8e94ae7cb9906ccb77f8f7d9c77928c1d3ed" gracePeriod=30 Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.586725 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-scpxv" event={"ID":"84103afc-63d9-416c-bc51-729cd8c6eeed","Type":"ContainerStarted","Data":"d2105a12a7447eed63fffef6151979b3cefd11a2991c7c854b929bf968cf83ab"} Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.605178 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dlxs4" event={"ID":"68c0d50f-01a6-4e5c-92e8-626af12ba85a","Type":"ContainerStarted","Data":"3cd2125223671d9a926ee99d37f5467bccda8df00de2d1f0a150687f14e2e0af"} Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.608293 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7999dc6947-8xp26" podStartSLOduration=5.231184168 podStartE2EDuration="39.608275372s" podCreationTimestamp="2026-01-23 12:11:06 +0000 UTC" firstStartedPulling="2026-01-23 12:11:08.686912146 +0000 UTC m=+1112.855984362" lastFinishedPulling="2026-01-23 12:11:43.06400334 +0000 UTC m=+1147.233075566" observedRunningTime="2026-01-23 12:11:45.595470958 +0000 UTC m=+1149.764543184" watchObservedRunningTime="2026-01-23 12:11:45.608275372 +0000 UTC m=+1149.777347598" Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.616432 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d44bd7746-lpzlt" event={"ID":"581ecfce-2612-48aa-beeb-a41024ef2b6b","Type":"ContainerStarted","Data":"a9f5b45dcc5b04b3bf3ecb6680aae49876c6d666882bf3eeb621de8ccd4a8a85"} Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.616486 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d44bd7746-lpzlt" event={"ID":"581ecfce-2612-48aa-beeb-a41024ef2b6b","Type":"ContainerStarted","Data":"fc4db0e600059eee9eabcc3820b513d6ea23e26f15324de6b742c4fec5ec42ac"} Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.619207 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerStarted","Data":"374cfbc4973d1db6573e1de86b64036956c8e20a8e1c4509c68c4283e2833d30"} Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.619270 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerStarted","Data":"24c573c02fdd0051b6038ffa286bb37216868382c095ce2a18d56cbf378c3048"} Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.619280 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerStarted","Data":"0d30019aff75d7d6b6b32e59842e8b2bfbef17e7ee613557a10f1f6afbc6c6cd"} Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.621488 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8d804805-bf09-488a-80cc-ddda0ba1d466","Type":"ContainerStarted","Data":"a1ba41f3314b057b5cae526ec0277f2a4d1fc8878115e20cde84aefec1e0fa98"} Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.624521 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-scpxv" podStartSLOduration=2.422925349 podStartE2EDuration="39.62450334s" podCreationTimestamp="2026-01-23 12:11:06 +0000 UTC" firstStartedPulling="2026-01-23 12:11:08.053985159 +0000 UTC m=+1112.223057385" lastFinishedPulling="2026-01-23 12:11:45.25556316 +0000 UTC m=+1149.424635376" observedRunningTime="2026-01-23 12:11:45.61308053 +0000 UTC m=+1149.782152756" watchObservedRunningTime="2026-01-23 12:11:45.62450334 +0000 UTC m=+1149.793575566" Jan 23 12:11:45 crc kubenswrapper[4865]: I0123 12:11:45.631019 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc78e553-ea01-4581-b947-c4cff5f2ba13","Type":"ContainerStarted","Data":"ee2d132745737c55d7577766afd93dffd44962b36efa90f13e080713e154b1b6"} Jan 23 12:11:46 crc kubenswrapper[4865]: I0123 12:11:46.162160 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66f7b94cdb-f7pw2" podStartSLOduration=33.162142229 podStartE2EDuration="33.162142229s" podCreationTimestamp="2026-01-23 12:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:11:45.649410752 +0000 UTC m=+1149.818482978" watchObservedRunningTime="2026-01-23 12:11:46.162142229 +0000 UTC m=+1150.331214455" Jan 23 12:11:46 crc kubenswrapper[4865]: I0123 12:11:46.642302 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dlxs4" event={"ID":"68c0d50f-01a6-4e5c-92e8-626af12ba85a","Type":"ContainerStarted","Data":"fb4e00a76b2e2eb4ed64c98986b511131a2e4fa10c3384f693c9bd331275ac7d"} Jan 23 12:11:46 crc kubenswrapper[4865]: I0123 12:11:46.647499 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d44bd7746-lpzlt" event={"ID":"581ecfce-2612-48aa-beeb-a41024ef2b6b","Type":"ContainerStarted","Data":"ad0bd0b06faa3989d6d91f836137fa93ac3878b4dcf0b308bb72332eb709161b"} Jan 23 12:11:46 crc kubenswrapper[4865]: I0123 12:11:46.658063 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc78e553-ea01-4581-b947-c4cff5f2ba13","Type":"ContainerStarted","Data":"7c5512fb92748e26a079515280f375e6f2b357d6837ba7f5d52f4e787ff04d46"} Jan 23 12:11:46 crc kubenswrapper[4865]: I0123 12:11:46.661485 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eec67cc0-b9ae-4767-82b9-ffa764ab07d0","Type":"ContainerStarted","Data":"e8ae1337416c44ff566331434359686312e110501cf8946a2f15b5443c64207e"} Jan 23 12:11:46 crc kubenswrapper[4865]: I0123 12:11:46.698405 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dlxs4" podStartSLOduration=14.698386503 podStartE2EDuration="14.698386503s" podCreationTimestamp="2026-01-23 12:11:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:11:46.677513231 +0000 UTC m=+1150.846585457" watchObservedRunningTime="2026-01-23 12:11:46.698386503 +0000 UTC m=+1150.867458729" Jan 23 12:11:46 crc kubenswrapper[4865]: I0123 12:11:46.738663 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7d44bd7746-lpzlt" podStartSLOduration=33.738647871 podStartE2EDuration="33.738647871s" podCreationTimestamp="2026-01-23 12:11:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:11:46.734861058 +0000 UTC m=+1150.903933284" watchObservedRunningTime="2026-01-23 12:11:46.738647871 +0000 UTC m=+1150.907720097" Jan 23 12:11:47 crc kubenswrapper[4865]: I0123 12:11:47.443987 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:11:47 crc kubenswrapper[4865]: I0123 12:11:47.675462 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc78e553-ea01-4581-b947-c4cff5f2ba13","Type":"ContainerStarted","Data":"b99b9fe2bb82811b02471d95a3a65c8db93f84602ed05c7a379db454b7c48f4e"} Jan 23 12:11:47 crc kubenswrapper[4865]: I0123 12:11:47.679709 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eec67cc0-b9ae-4767-82b9-ffa764ab07d0","Type":"ContainerStarted","Data":"894e90243cbb2803719bc08e23ac7ff919f356abcb7852d076b6e090eca33fce"} Jan 23 12:11:47 crc kubenswrapper[4865]: I0123 12:11:47.705504 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.705484928 podStartE2EDuration="7.705484928s" podCreationTimestamp="2026-01-23 12:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:11:47.699985703 +0000 UTC m=+1151.869057929" watchObservedRunningTime="2026-01-23 12:11:47.705484928 +0000 UTC m=+1151.874557154" Jan 23 12:11:47 crc kubenswrapper[4865]: I0123 12:11:47.737543 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.737519224 podStartE2EDuration="6.737519224s" podCreationTimestamp="2026-01-23 12:11:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:11:47.725499328 +0000 UTC m=+1151.894571554" watchObservedRunningTime="2026-01-23 12:11:47.737519224 +0000 UTC m=+1151.906591450" Jan 23 12:11:50 crc kubenswrapper[4865]: I0123 12:11:50.863828 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 12:11:50 crc kubenswrapper[4865]: I0123 12:11:50.864203 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 12:11:50 crc kubenswrapper[4865]: I0123 12:11:50.949590 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 12:11:50 crc kubenswrapper[4865]: I0123 12:11:50.949720 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 12:11:51 crc kubenswrapper[4865]: I0123 12:11:51.719237 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 12:11:51 crc kubenswrapper[4865]: I0123 12:11:51.719859 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 12:11:51 
crc kubenswrapper[4865]: I0123 12:11:51.991168 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:51 crc kubenswrapper[4865]: I0123 12:11:51.991211 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:52 crc kubenswrapper[4865]: I0123 12:11:52.025220 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:52 crc kubenswrapper[4865]: I0123 12:11:52.036655 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:52 crc kubenswrapper[4865]: I0123 12:11:52.727333 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:52 crc kubenswrapper[4865]: I0123 12:11:52.727384 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:54 crc kubenswrapper[4865]: I0123 12:11:54.090721 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:54 crc kubenswrapper[4865]: I0123 12:11:54.091009 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:11:54 crc kubenswrapper[4865]: I0123 12:11:54.355391 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:54 crc kubenswrapper[4865]: I0123 12:11:54.355949 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:11:55 crc kubenswrapper[4865]: I0123 12:11:55.762203 4865 generic.go:334] "Generic (PLEG): container finished" podID="84103afc-63d9-416c-bc51-729cd8c6eeed" containerID="d2105a12a7447eed63fffef6151979b3cefd11a2991c7c854b929bf968cf83ab" exitCode=0 Jan 23 12:11:55 crc kubenswrapper[4865]: I0123 12:11:55.762253 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-scpxv" event={"ID":"84103afc-63d9-416c-bc51-729cd8c6eeed","Type":"ContainerDied","Data":"d2105a12a7447eed63fffef6151979b3cefd11a2991c7c854b929bf968cf83ab"} Jan 23 12:11:56 crc kubenswrapper[4865]: I0123 12:11:56.468471 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:56 crc kubenswrapper[4865]: I0123 12:11:56.468854 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 12:11:56 crc kubenswrapper[4865]: I0123 12:11:56.471781 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 12:11:56 crc kubenswrapper[4865]: I0123 12:11:56.653321 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 12:11:56 crc kubenswrapper[4865]: I0123 12:11:56.677312 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 12:11:56 crc kubenswrapper[4865]: I0123 12:11:56.775320 4865 generic.go:334] "Generic (PLEG): container finished" podID="68c0d50f-01a6-4e5c-92e8-626af12ba85a" containerID="fb4e00a76b2e2eb4ed64c98986b511131a2e4fa10c3384f693c9bd331275ac7d" exitCode=0 Jan 23 12:11:56 crc kubenswrapper[4865]: I0123 12:11:56.775491 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dlxs4" event={"ID":"68c0d50f-01a6-4e5c-92e8-626af12ba85a","Type":"ContainerDied","Data":"fb4e00a76b2e2eb4ed64c98986b511131a2e4fa10c3384f693c9bd331275ac7d"} Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.281049 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.361297 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-scripts\") pod \"84103afc-63d9-416c-bc51-729cd8c6eeed\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.361425 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27qvx\" (UniqueName: \"kubernetes.io/projected/84103afc-63d9-416c-bc51-729cd8c6eeed-kube-api-access-27qvx\") pod \"84103afc-63d9-416c-bc51-729cd8c6eeed\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.361543 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84103afc-63d9-416c-bc51-729cd8c6eeed-logs\") pod \"84103afc-63d9-416c-bc51-729cd8c6eeed\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.361585 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-config-data\") pod \"84103afc-63d9-416c-bc51-729cd8c6eeed\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.361645 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-combined-ca-bundle\") pod \"84103afc-63d9-416c-bc51-729cd8c6eeed\" (UID: \"84103afc-63d9-416c-bc51-729cd8c6eeed\") " Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.370518 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84103afc-63d9-416c-bc51-729cd8c6eeed-logs" (OuterVolumeSpecName: "logs") pod "84103afc-63d9-416c-bc51-729cd8c6eeed" (UID: "84103afc-63d9-416c-bc51-729cd8c6eeed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.373341 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-scripts" (OuterVolumeSpecName: "scripts") pod "84103afc-63d9-416c-bc51-729cd8c6eeed" (UID: "84103afc-63d9-416c-bc51-729cd8c6eeed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.374961 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84103afc-63d9-416c-bc51-729cd8c6eeed-kube-api-access-27qvx" (OuterVolumeSpecName: "kube-api-access-27qvx") pod "84103afc-63d9-416c-bc51-729cd8c6eeed" (UID: "84103afc-63d9-416c-bc51-729cd8c6eeed"). InnerVolumeSpecName "kube-api-access-27qvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.399096 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84103afc-63d9-416c-bc51-729cd8c6eeed" (UID: "84103afc-63d9-416c-bc51-729cd8c6eeed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.435477 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-config-data" (OuterVolumeSpecName: "config-data") pod "84103afc-63d9-416c-bc51-729cd8c6eeed" (UID: "84103afc-63d9-416c-bc51-729cd8c6eeed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.476693 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27qvx\" (UniqueName: \"kubernetes.io/projected/84103afc-63d9-416c-bc51-729cd8c6eeed-kube-api-access-27qvx\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.476836 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84103afc-63d9-416c-bc51-729cd8c6eeed-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.476906 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.476972 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.477025 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84103afc-63d9-416c-bc51-729cd8c6eeed-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.791962 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d804805-bf09-488a-80cc-ddda0ba1d466","Type":"ContainerStarted","Data":"f7d59b78c7a56ceaeed9f18a8060713ed1873c87b27747307d57459b8efd3040"} Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.819202 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-scpxv" event={"ID":"84103afc-63d9-416c-bc51-729cd8c6eeed","Type":"ContainerDied","Data":"5aa165ea5b3fe8de04952ef8b08ee2e0094ef6c6e77bb3514d04e9d01549bf1f"} Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.819232 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-scpxv" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.819252 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aa165ea5b3fe8de04952ef8b08ee2e0094ef6c6e77bb3514d04e9d01549bf1f" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.826170 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jw9z7" event={"ID":"3e6117d5-9df1-4299-8358-d7235d7847d2","Type":"ContainerStarted","Data":"d540d33b36c5adf562161dadb0bcd930ee1137ee4310220b513fed962a09963d"} Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.853271 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-jw9z7" podStartSLOduration=3.5382663389999998 podStartE2EDuration="52.853237287s" podCreationTimestamp="2026-01-23 12:11:05 +0000 UTC" firstStartedPulling="2026-01-23 12:11:08.053989709 +0000 UTC m=+1112.223061935" lastFinishedPulling="2026-01-23 12:11:57.368960657 +0000 UTC m=+1161.538032883" observedRunningTime="2026-01-23 12:11:57.848630494 +0000 UTC m=+1162.017702720" watchObservedRunningTime="2026-01-23 12:11:57.853237287 +0000 UTC m=+1162.022309503" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.941507 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6fd5fd954-xn5jf"] Jan 23 12:11:57 crc kubenswrapper[4865]: E0123 12:11:57.942180 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerName="dnsmasq-dns" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.942196 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerName="dnsmasq-dns" Jan 23 12:11:57 crc kubenswrapper[4865]: E0123 12:11:57.942224 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84103afc-63d9-416c-bc51-729cd8c6eeed" containerName="placement-db-sync" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.942231 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="84103afc-63d9-416c-bc51-729cd8c6eeed" containerName="placement-db-sync" Jan 23 12:11:57 crc kubenswrapper[4865]: E0123 12:11:57.942245 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerName="init" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.942252 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerName="init" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.942415 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="84103afc-63d9-416c-bc51-729cd8c6eeed" containerName="placement-db-sync" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.942443 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fc82a0b-b3fb-42ea-b23f-a5459a2d00f5" containerName="dnsmasq-dns" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.943381 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.946694 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.947053 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jh6tv" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.947250 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.947388 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.947500 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 12:11:57 crc kubenswrapper[4865]: I0123 12:11:57.993672 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6fd5fd954-xn5jf"] Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.096030 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-combined-ca-bundle\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.096095 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-config-data\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.096128 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-public-tls-certs\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.096189 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-scripts\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.096338 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9hj8\" (UniqueName: \"kubernetes.io/projected/5ce3a0ea-6400-4598-b02f-62b52f871e7c-kube-api-access-h9hj8\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.096360 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-internal-tls-certs\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.096388 
4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ce3a0ea-6400-4598-b02f-62b52f871e7c-logs\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.166940 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.197543 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9hj8\" (UniqueName: \"kubernetes.io/projected/5ce3a0ea-6400-4598-b02f-62b52f871e7c-kube-api-access-h9hj8\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.197585 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-internal-tls-certs\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.197631 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ce3a0ea-6400-4598-b02f-62b52f871e7c-logs\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.197689 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-combined-ca-bundle\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.197704 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-config-data\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.197725 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-public-tls-certs\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.197748 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-scripts\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.198321 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ce3a0ea-6400-4598-b02f-62b52f871e7c-logs\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc 
kubenswrapper[4865]: I0123 12:11:58.205292 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-scripts\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.205572 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-config-data\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.208150 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-internal-tls-certs\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.211931 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-combined-ca-bundle\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.217185 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9hj8\" (UniqueName: \"kubernetes.io/projected/5ce3a0ea-6400-4598-b02f-62b52f871e7c-kube-api-access-h9hj8\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.227284 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ce3a0ea-6400-4598-b02f-62b52f871e7c-public-tls-certs\") pod \"placement-6fd5fd954-xn5jf\" (UID: \"5ce3a0ea-6400-4598-b02f-62b52f871e7c\") " pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.269954 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.302521 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-combined-ca-bundle\") pod \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.305218 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-credential-keys\") pod \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.305545 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-fernet-keys\") pod \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.305579 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-config-data\") pod \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.305659 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvk28\" (UniqueName: \"kubernetes.io/projected/68c0d50f-01a6-4e5c-92e8-626af12ba85a-kube-api-access-rvk28\") pod \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.305697 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-scripts\") pod \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\" (UID: \"68c0d50f-01a6-4e5c-92e8-626af12ba85a\") " Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.311812 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "68c0d50f-01a6-4e5c-92e8-626af12ba85a" (UID: "68c0d50f-01a6-4e5c-92e8-626af12ba85a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.321790 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-scripts" (OuterVolumeSpecName: "scripts") pod "68c0d50f-01a6-4e5c-92e8-626af12ba85a" (UID: "68c0d50f-01a6-4e5c-92e8-626af12ba85a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.323970 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68c0d50f-01a6-4e5c-92e8-626af12ba85a-kube-api-access-rvk28" (OuterVolumeSpecName: "kube-api-access-rvk28") pod "68c0d50f-01a6-4e5c-92e8-626af12ba85a" (UID: "68c0d50f-01a6-4e5c-92e8-626af12ba85a"). InnerVolumeSpecName "kube-api-access-rvk28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.327760 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "68c0d50f-01a6-4e5c-92e8-626af12ba85a" (UID: "68c0d50f-01a6-4e5c-92e8-626af12ba85a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.351810 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68c0d50f-01a6-4e5c-92e8-626af12ba85a" (UID: "68c0d50f-01a6-4e5c-92e8-626af12ba85a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.405735 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-config-data" (OuterVolumeSpecName: "config-data") pod "68c0d50f-01a6-4e5c-92e8-626af12ba85a" (UID: "68c0d50f-01a6-4e5c-92e8-626af12ba85a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.415754 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.415786 4865 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.415795 4865 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.415803 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.415813 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvk28\" (UniqueName: \"kubernetes.io/projected/68c0d50f-01a6-4e5c-92e8-626af12ba85a-kube-api-access-rvk28\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.415824 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68c0d50f-01a6-4e5c-92e8-626af12ba85a-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.796676 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6fd5fd954-xn5jf"] Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.858954 4865 generic.go:334] "Generic (PLEG): container finished" podID="0155ddd7-e729-44e5-b3c9-e18d88d171ef" containerID="f1d6e4b57940fae2b9d0686009b1bfb552702fbf15d15ea19a28e777ae03b388" exitCode=0 Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.859057 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zntkp" 
event={"ID":"0155ddd7-e729-44e5-b3c9-e18d88d171ef","Type":"ContainerDied","Data":"f1d6e4b57940fae2b9d0686009b1bfb552702fbf15d15ea19a28e777ae03b388"} Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.868718 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xqdv2" event={"ID":"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90","Type":"ContainerStarted","Data":"b739b8de3e8d33658d22d2bd79d15644861637e873f7619c8abb911df38bffde"} Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.895928 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sxqmn" event={"ID":"afab83a5-8e47-4531-80de-ae69dfd11bd9","Type":"ContainerStarted","Data":"f31df3bdd703f46a94516666fc069364522d00b4d795aa2e1b847e7c2a52a592"} Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.902924 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dlxs4" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.904051 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dlxs4" event={"ID":"68c0d50f-01a6-4e5c-92e8-626af12ba85a","Type":"ContainerDied","Data":"3cd2125223671d9a926ee99d37f5467bccda8df00de2d1f0a150687f14e2e0af"} Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.904099 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cd2125223671d9a926ee99d37f5467bccda8df00de2d1f0a150687f14e2e0af" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.910073 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fd5fd954-xn5jf" event={"ID":"5ce3a0ea-6400-4598-b02f-62b52f871e7c","Type":"ContainerStarted","Data":"3a10b85a38d706516f6c18a806157053bf67b53014d3974bbe87e291886b9e77"} Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.980198 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5c55dfc954-p2hjb"] Jan 23 12:11:58 crc kubenswrapper[4865]: E0123 12:11:58.980563 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68c0d50f-01a6-4e5c-92e8-626af12ba85a" containerName="keystone-bootstrap" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.980578 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="68c0d50f-01a6-4e5c-92e8-626af12ba85a" containerName="keystone-bootstrap" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.980786 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="68c0d50f-01a6-4e5c-92e8-626af12ba85a" containerName="keystone-bootstrap" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.981340 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.986589 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.986801 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.986962 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nlns" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.987075 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.987187 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 12:11:58 crc kubenswrapper[4865]: I0123 12:11:58.987498 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.010862 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c55dfc954-p2hjb"] Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.016390 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xqdv2" podStartSLOduration=3.238853954 podStartE2EDuration="54.016365559s" podCreationTimestamp="2026-01-23 12:11:05 +0000 UTC" firstStartedPulling="2026-01-23 12:11:07.460238504 +0000 UTC m=+1111.629310720" lastFinishedPulling="2026-01-23 12:11:58.237750099 +0000 UTC m=+1162.406822325" observedRunningTime="2026-01-23 12:11:58.938741065 +0000 UTC m=+1163.107813291" watchObservedRunningTime="2026-01-23 12:11:59.016365559 +0000 UTC m=+1163.185437785" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.022143 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-sxqmn" podStartSLOduration=4.183783675 podStartE2EDuration="54.022126771s" podCreationTimestamp="2026-01-23 12:11:05 +0000 UTC" firstStartedPulling="2026-01-23 12:11:07.532407105 +0000 UTC m=+1111.701479331" lastFinishedPulling="2026-01-23 12:11:57.370750211 +0000 UTC m=+1161.539822427" observedRunningTime="2026-01-23 12:11:58.968381352 +0000 UTC m=+1163.137453578" watchObservedRunningTime="2026-01-23 12:11:59.022126771 +0000 UTC m=+1163.191198997" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.133302 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-credential-keys\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.133678 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-fernet-keys\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.133706 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-scripts\") pod \"keystone-5c55dfc954-p2hjb\" 
(UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.133726 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-config-data\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.133742 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-internal-tls-certs\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.133841 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q7kg\" (UniqueName: \"kubernetes.io/projected/214dab4d-f202-402b-bea3-b483dd61f2dd-kube-api-access-4q7kg\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.133905 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-public-tls-certs\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.133948 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-combined-ca-bundle\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.235010 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q7kg\" (UniqueName: \"kubernetes.io/projected/214dab4d-f202-402b-bea3-b483dd61f2dd-kube-api-access-4q7kg\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.235087 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-public-tls-certs\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.235127 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-combined-ca-bundle\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.235209 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-credential-keys\") pod 
\"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.235237 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-fernet-keys\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.235261 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-scripts\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.235278 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-config-data\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.235302 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-internal-tls-certs\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.250047 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-internal-tls-certs\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.250414 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-scripts\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.251161 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-fernet-keys\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.251164 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-config-data\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.253916 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-credential-keys\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.268451 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q7kg\" (UniqueName: \"kubernetes.io/projected/214dab4d-f202-402b-bea3-b483dd61f2dd-kube-api-access-4q7kg\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.274578 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-public-tls-certs\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.275848 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/214dab4d-f202-402b-bea3-b483dd61f2dd-combined-ca-bundle\") pod \"keystone-5c55dfc954-p2hjb\" (UID: \"214dab4d-f202-402b-bea3-b483dd61f2dd\") " pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.332395 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.742658 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c55dfc954-p2hjb"] Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.939807 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c55dfc954-p2hjb" event={"ID":"214dab4d-f202-402b-bea3-b483dd61f2dd","Type":"ContainerStarted","Data":"c7ff9e70a3f4024da1245970063603a25e910954e737e264088134258c310501"} Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.949680 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fd5fd954-xn5jf" event={"ID":"5ce3a0ea-6400-4598-b02f-62b52f871e7c","Type":"ContainerStarted","Data":"486e8f9ce6ee2a4776f020bd22c29167a0080fd375738b9088724d3567476768"} Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.949740 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6fd5fd954-xn5jf" event={"ID":"5ce3a0ea-6400-4598-b02f-62b52f871e7c","Type":"ContainerStarted","Data":"1e1eb4e4205ce9c69d51c9bb64f04c8b95640447198dbb6c8f0922517f8ba1cd"} Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.949781 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.949793 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:11:59 crc kubenswrapper[4865]: I0123 12:11:59.978656 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6fd5fd954-xn5jf" podStartSLOduration=2.978640824 podStartE2EDuration="2.978640824s" podCreationTimestamp="2026-01-23 12:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:11:59.976768238 +0000 UTC m=+1164.145840484" watchObservedRunningTime="2026-01-23 12:11:59.978640824 +0000 UTC m=+1164.147713050" Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.446714 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zntkp" Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.575760 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0155ddd7-e729-44e5-b3c9-e18d88d171ef-config\") pod \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\" (UID: \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\") " Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.576001 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0155ddd7-e729-44e5-b3c9-e18d88d171ef-combined-ca-bundle\") pod \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\" (UID: \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\") " Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.576074 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghtcd\" (UniqueName: \"kubernetes.io/projected/0155ddd7-e729-44e5-b3c9-e18d88d171ef-kube-api-access-ghtcd\") pod \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\" (UID: \"0155ddd7-e729-44e5-b3c9-e18d88d171ef\") " Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.614058 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0155ddd7-e729-44e5-b3c9-e18d88d171ef-kube-api-access-ghtcd" (OuterVolumeSpecName: "kube-api-access-ghtcd") pod "0155ddd7-e729-44e5-b3c9-e18d88d171ef" (UID: "0155ddd7-e729-44e5-b3c9-e18d88d171ef"). InnerVolumeSpecName "kube-api-access-ghtcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.619823 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0155ddd7-e729-44e5-b3c9-e18d88d171ef-config" (OuterVolumeSpecName: "config") pod "0155ddd7-e729-44e5-b3c9-e18d88d171ef" (UID: "0155ddd7-e729-44e5-b3c9-e18d88d171ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.641193 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0155ddd7-e729-44e5-b3c9-e18d88d171ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0155ddd7-e729-44e5-b3c9-e18d88d171ef" (UID: "0155ddd7-e729-44e5-b3c9-e18d88d171ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.692462 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghtcd\" (UniqueName: \"kubernetes.io/projected/0155ddd7-e729-44e5-b3c9-e18d88d171ef-kube-api-access-ghtcd\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.692499 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0155ddd7-e729-44e5-b3c9-e18d88d171ef-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.692509 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0155ddd7-e729-44e5-b3c9-e18d88d171ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.958726 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zntkp" Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.958731 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zntkp" event={"ID":"0155ddd7-e729-44e5-b3c9-e18d88d171ef","Type":"ContainerDied","Data":"1b24ae4757e3bfd0f960a1c1beaaafac8ebc911c06831808e506c463d5bd1726"} Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.958868 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b24ae4757e3bfd0f960a1c1beaaafac8ebc911c06831808e506c463d5bd1726" Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.963374 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c55dfc954-p2hjb" event={"ID":"214dab4d-f202-402b-bea3-b483dd61f2dd","Type":"ContainerStarted","Data":"f9d7575d89a1d68faadedffadd33d8769b782c3c4dda92f3b48a3ae8b450e10c"} Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.963458 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:12:00 crc kubenswrapper[4865]: I0123 12:12:00.998175 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5c55dfc954-p2hjb" podStartSLOduration=2.998153823 podStartE2EDuration="2.998153823s" podCreationTimestamp="2026-01-23 12:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:12:00.990939656 +0000 UTC m=+1165.160011882" watchObservedRunningTime="2026-01-23 12:12:00.998153823 +0000 UTC m=+1165.167226049" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.255904 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56bbf5954f-z4ncb"] Jan 23 12:12:01 crc kubenswrapper[4865]: E0123 12:12:01.256305 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0155ddd7-e729-44e5-b3c9-e18d88d171ef" containerName="neutron-db-sync" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.256322 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0155ddd7-e729-44e5-b3c9-e18d88d171ef" containerName="neutron-db-sync" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.256513 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0155ddd7-e729-44e5-b3c9-e18d88d171ef" containerName="neutron-db-sync" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.257371 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.312360 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56bbf5954f-z4ncb"] Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.379640 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-54757d9768-tnwjg"] Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.381092 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.389182 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54757d9768-tnwjg"] Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.390736 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.390848 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.391380 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.391459 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qjk5k" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.412153 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-dns-swift-storage-0\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.412295 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-ovsdbserver-sb\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.412348 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-ovsdbserver-nb\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.412430 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-config\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.412522 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbbgh\" (UniqueName: \"kubernetes.io/projected/583bc182-b987-483a-b691-23853d93edd4-kube-api-access-jbbgh\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.422259 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-dns-svc\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.523784 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-config\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.523838 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-ovndb-tls-certs\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.523869 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbbgh\" (UniqueName: \"kubernetes.io/projected/583bc182-b987-483a-b691-23853d93edd4-kube-api-access-jbbgh\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.523889 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-dns-svc\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.523916 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-combined-ca-bundle\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.523935 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-config\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.523953 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-dns-swift-storage-0\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.523979 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-httpd-config\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.524019 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-ovsdbserver-sb\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.524042 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-ovsdbserver-nb\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.524060 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tsb\" (UniqueName: \"kubernetes.io/projected/ad8dd067-54ba-4a76-a30e-7542369c3b1d-kube-api-access-r4tsb\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.525039 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-config\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.525838 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-dns-svc\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.526435 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-dns-swift-storage-0\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.527012 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-ovsdbserver-sb\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.527683 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-ovsdbserver-nb\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.549816 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbbgh\" (UniqueName: \"kubernetes.io/projected/583bc182-b987-483a-b691-23853d93edd4-kube-api-access-jbbgh\") pod \"dnsmasq-dns-56bbf5954f-z4ncb\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.592412 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.625112 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-ovndb-tls-certs\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.625206 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-combined-ca-bundle\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.625236 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-config\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.625274 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-httpd-config\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.625353 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4tsb\" (UniqueName: \"kubernetes.io/projected/ad8dd067-54ba-4a76-a30e-7542369c3b1d-kube-api-access-r4tsb\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.630481 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-config\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.633965 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-ovndb-tls-certs\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.634640 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-httpd-config\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.637362 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-combined-ca-bundle\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.648995 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-r4tsb\" (UniqueName: \"kubernetes.io/projected/ad8dd067-54ba-4a76-a30e-7542369c3b1d-kube-api-access-r4tsb\") pod \"neutron-54757d9768-tnwjg\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:01 crc kubenswrapper[4865]: I0123 12:12:01.735003 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:02 crc kubenswrapper[4865]: I0123 12:12:02.240550 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56bbf5954f-z4ncb"] Jan 23 12:12:02 crc kubenswrapper[4865]: W0123 12:12:02.526047 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad8dd067_54ba_4a76_a30e_7542369c3b1d.slice/crio-5d17a3f84ec445e1856bf147e5ef0e491987cd4c6d7ad67256194a2f888c6be9 WatchSource:0}: Error finding container 5d17a3f84ec445e1856bf147e5ef0e491987cd4c6d7ad67256194a2f888c6be9: Status 404 returned error can't find the container with id 5d17a3f84ec445e1856bf147e5ef0e491987cd4c6d7ad67256194a2f888c6be9 Jan 23 12:12:02 crc kubenswrapper[4865]: I0123 12:12:02.546284 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-54757d9768-tnwjg"] Jan 23 12:12:02 crc kubenswrapper[4865]: I0123 12:12:02.988010 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54757d9768-tnwjg" event={"ID":"ad8dd067-54ba-4a76-a30e-7542369c3b1d","Type":"ContainerStarted","Data":"5d17a3f84ec445e1856bf147e5ef0e491987cd4c6d7ad67256194a2f888c6be9"} Jan 23 12:12:02 crc kubenswrapper[4865]: I0123 12:12:02.989931 4865 generic.go:334] "Generic (PLEG): container finished" podID="583bc182-b987-483a-b691-23853d93edd4" containerID="938e9fb8404ba18cb4eb29bb6f83fb8e7f6d795444574a55b09c3df91c2c463c" exitCode=0 Jan 23 12:12:02 crc kubenswrapper[4865]: I0123 12:12:02.990012 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" event={"ID":"583bc182-b987-483a-b691-23853d93edd4","Type":"ContainerDied","Data":"938e9fb8404ba18cb4eb29bb6f83fb8e7f6d795444574a55b09c3df91c2c463c"} Jan 23 12:12:02 crc kubenswrapper[4865]: I0123 12:12:02.990093 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" event={"ID":"583bc182-b987-483a-b691-23853d93edd4","Type":"ContainerStarted","Data":"cc0933b0aad649a8950dca49a10c91504b11eaf9b6002cac720fd0ef1d3cccae"} Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:03.999941 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54757d9768-tnwjg" event={"ID":"ad8dd067-54ba-4a76-a30e-7542369c3b1d","Type":"ContainerStarted","Data":"e5c6758c5de86287520817f59d637a052e092caf231e0407c2f2724b9599ea14"} Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.000449 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54757d9768-tnwjg" event={"ID":"ad8dd067-54ba-4a76-a30e-7542369c3b1d","Type":"ContainerStarted","Data":"9bef326e5b4a8a08fec7b1a92b5401b43f385f19fcf71f9f6fbf90a6eb10027d"} Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.000469 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.005449 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" 
event={"ID":"583bc182-b987-483a-b691-23853d93edd4","Type":"ContainerStarted","Data":"55e91396818145f0223a0d6bf6c9d2fddd6184292ba2d88e360e4c315cc1530e"} Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.005633 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.024876 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-54757d9768-tnwjg" podStartSLOduration=3.02485351 podStartE2EDuration="3.02485351s" podCreationTimestamp="2026-01-23 12:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:12:04.017879459 +0000 UTC m=+1168.186951685" watchObservedRunningTime="2026-01-23 12:12:04.02485351 +0000 UTC m=+1168.193925736" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.050810 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" podStartSLOduration=3.050793236 podStartE2EDuration="3.050793236s" podCreationTimestamp="2026-01-23 12:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:12:04.046685715 +0000 UTC m=+1168.215757941" watchObservedRunningTime="2026-01-23 12:12:04.050793236 +0000 UTC m=+1168.219865462" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.094897 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.163380 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b9c4785f9-kx698"] Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.184872 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.188076 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b9c4785f9-kx698"] Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.188421 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.189724 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.293892 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-ovndb-tls-certs\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.293959 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-public-tls-certs\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.294013 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-combined-ca-bundle\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.294219 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-config\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.294265 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-httpd-config\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.294293 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvvvm\" (UniqueName: \"kubernetes.io/projected/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-kube-api-access-pvvvm\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.294337 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-internal-tls-certs\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.356765 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" 
containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.396106 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-internal-tls-certs\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.396222 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-ovndb-tls-certs\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.396255 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-public-tls-certs\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.397250 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-combined-ca-bundle\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.397313 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-config\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.397344 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-httpd-config\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.397366 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvvvm\" (UniqueName: \"kubernetes.io/projected/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-kube-api-access-pvvvm\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.404978 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-httpd-config\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.411345 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-ovndb-tls-certs\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " 
pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.411828 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-public-tls-certs\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.414654 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-config\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.414995 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-internal-tls-certs\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.415289 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvvvm\" (UniqueName: \"kubernetes.io/projected/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-kube-api-access-pvvvm\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.417904 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6bc30aa-5b02-4c6b-ac0e-43799b7929dd-combined-ca-bundle\") pod \"neutron-b9c4785f9-kx698\" (UID: \"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd\") " pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:04 crc kubenswrapper[4865]: I0123 12:12:04.547686 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:05 crc kubenswrapper[4865]: I0123 12:12:05.263227 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b9c4785f9-kx698"] Jan 23 12:12:09 crc kubenswrapper[4865]: I0123 12:12:09.055516 4865 generic.go:334] "Generic (PLEG): container finished" podID="1dbb30bd-db3b-48a2-96dd-6193b6a7ab90" containerID="b739b8de3e8d33658d22d2bd79d15644861637e873f7619c8abb911df38bffde" exitCode=0 Jan 23 12:12:09 crc kubenswrapper[4865]: I0123 12:12:09.055744 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xqdv2" event={"ID":"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90","Type":"ContainerDied","Data":"b739b8de3e8d33658d22d2bd79d15644861637e873f7619c8abb911df38bffde"} Jan 23 12:12:10 crc kubenswrapper[4865]: I0123 12:12:10.063873 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b9c4785f9-kx698" event={"ID":"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd","Type":"ContainerStarted","Data":"55a27e496405fa97740cb04d2defb50e0cd34e1a3dfa25efdb3ff3afad1e902c"} Jan 23 12:12:10 crc kubenswrapper[4865]: I0123 12:12:10.854406 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:12:10 crc kubenswrapper[4865]: I0123 12:12:10.926168 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-db-sync-config-data\") pod \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\" (UID: \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\") " Jan 23 12:12:10 crc kubenswrapper[4865]: I0123 12:12:10.926325 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-combined-ca-bundle\") pod \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\" (UID: \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\") " Jan 23 12:12:10 crc kubenswrapper[4865]: I0123 12:12:10.926407 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-277q4\" (UniqueName: \"kubernetes.io/projected/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-kube-api-access-277q4\") pod \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\" (UID: \"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90\") " Jan 23 12:12:10 crc kubenswrapper[4865]: I0123 12:12:10.936071 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-kube-api-access-277q4" (OuterVolumeSpecName: "kube-api-access-277q4") pod "1dbb30bd-db3b-48a2-96dd-6193b6a7ab90" (UID: "1dbb30bd-db3b-48a2-96dd-6193b6a7ab90"). InnerVolumeSpecName "kube-api-access-277q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:10 crc kubenswrapper[4865]: I0123 12:12:10.942691 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1dbb30bd-db3b-48a2-96dd-6193b6a7ab90" (UID: "1dbb30bd-db3b-48a2-96dd-6193b6a7ab90"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:10 crc kubenswrapper[4865]: I0123 12:12:10.993787 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dbb30bd-db3b-48a2-96dd-6193b6a7ab90" (UID: "1dbb30bd-db3b-48a2-96dd-6193b6a7ab90"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.028178 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.028211 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-277q4\" (UniqueName: \"kubernetes.io/projected/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-kube-api-access-277q4\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.028223 4865 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.082436 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xqdv2" event={"ID":"1dbb30bd-db3b-48a2-96dd-6193b6a7ab90","Type":"ContainerDied","Data":"d220761f2954b3afa39f9fe3d71a1a8b05dfe0118c7d34c8b347174851fe76c9"} Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.082480 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d220761f2954b3afa39f9fe3d71a1a8b05dfe0118c7d34c8b347174851fe76c9" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.082547 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xqdv2" Jan 23 12:12:11 crc kubenswrapper[4865]: E0123 12:12:11.158895 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.357445 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-98cdcc84f-cr2jf"] Jan 23 12:12:11 crc kubenswrapper[4865]: E0123 12:12:11.370737 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dbb30bd-db3b-48a2-96dd-6193b6a7ab90" containerName="barbican-db-sync" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.370778 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbb30bd-db3b-48a2-96dd-6193b6a7ab90" containerName="barbican-db-sync" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.371068 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dbb30bd-db3b-48a2-96dd-6193b6a7ab90" containerName="barbican-db-sync" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.372299 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.384938 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-b4zk8" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.385278 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.385469 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.398990 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-969599b78-gpdqv"] Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.400494 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.405922 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.441714 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb23e044-f1b3-4114-94af-7aa272f670a0-logs\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.441770 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2396993e-12f6-41c3-9f09-b501bf6fb29b-combined-ca-bundle\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.441805 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxkxg\" (UniqueName: \"kubernetes.io/projected/cb23e044-f1b3-4114-94af-7aa272f670a0-kube-api-access-vxkxg\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.441830 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb23e044-f1b3-4114-94af-7aa272f670a0-combined-ca-bundle\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.441882 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb23e044-f1b3-4114-94af-7aa272f670a0-config-data\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.441901 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h8cq\" (UniqueName: \"kubernetes.io/projected/2396993e-12f6-41c3-9f09-b501bf6fb29b-kube-api-access-2h8cq\") pod 
\"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.441918 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2396993e-12f6-41c3-9f09-b501bf6fb29b-logs\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.441949 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2396993e-12f6-41c3-9f09-b501bf6fb29b-config-data\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.441979 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2396993e-12f6-41c3-9f09-b501bf6fb29b-config-data-custom\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.442000 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb23e044-f1b3-4114-94af-7aa272f670a0-config-data-custom\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.463193 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-98cdcc84f-cr2jf"] Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.463260 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-969599b78-gpdqv"] Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.541484 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbf5954f-z4ncb"] Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.545530 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb23e044-f1b3-4114-94af-7aa272f670a0-logs\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.545589 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2396993e-12f6-41c3-9f09-b501bf6fb29b-combined-ca-bundle\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.545641 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxkxg\" (UniqueName: \"kubernetes.io/projected/cb23e044-f1b3-4114-94af-7aa272f670a0-kube-api-access-vxkxg\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " 
pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.545665 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb23e044-f1b3-4114-94af-7aa272f670a0-combined-ca-bundle\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.545719 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb23e044-f1b3-4114-94af-7aa272f670a0-config-data\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.545741 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h8cq\" (UniqueName: \"kubernetes.io/projected/2396993e-12f6-41c3-9f09-b501bf6fb29b-kube-api-access-2h8cq\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.545758 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2396993e-12f6-41c3-9f09-b501bf6fb29b-logs\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.545787 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2396993e-12f6-41c3-9f09-b501bf6fb29b-config-data\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.545818 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2396993e-12f6-41c3-9f09-b501bf6fb29b-config-data-custom\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.545837 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb23e044-f1b3-4114-94af-7aa272f670a0-config-data-custom\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.546069 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb23e044-f1b3-4114-94af-7aa272f670a0-logs\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.546570 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" podUID="583bc182-b987-483a-b691-23853d93edd4" containerName="dnsmasq-dns" 
containerID="cri-o://55e91396818145f0223a0d6bf6c9d2fddd6184292ba2d88e360e4c315cc1530e" gracePeriod=10 Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.547691 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.546904 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2396993e-12f6-41c3-9f09-b501bf6fb29b-logs\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.569084 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb23e044-f1b3-4114-94af-7aa272f670a0-config-data\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.578397 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2396993e-12f6-41c3-9f09-b501bf6fb29b-config-data-custom\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.589197 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2396993e-12f6-41c3-9f09-b501bf6fb29b-combined-ca-bundle\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.592348 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb23e044-f1b3-4114-94af-7aa272f670a0-combined-ca-bundle\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.592521 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb23e044-f1b3-4114-94af-7aa272f670a0-config-data-custom\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.592831 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxkxg\" (UniqueName: \"kubernetes.io/projected/cb23e044-f1b3-4114-94af-7aa272f670a0-kube-api-access-vxkxg\") pod \"barbican-worker-98cdcc84f-cr2jf\" (UID: \"cb23e044-f1b3-4114-94af-7aa272f670a0\") " pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.594732 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2396993e-12f6-41c3-9f09-b501bf6fb29b-config-data\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.599152 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h8cq\" (UniqueName: \"kubernetes.io/projected/2396993e-12f6-41c3-9f09-b501bf6fb29b-kube-api-access-2h8cq\") pod \"barbican-keystone-listener-969599b78-gpdqv\" (UID: \"2396993e-12f6-41c3-9f09-b501bf6fb29b\") " pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.623020 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bcf7d6c9-7lmbs"] Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.629460 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.674649 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bcf7d6c9-7lmbs"] Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.754072 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-98cdcc84f-cr2jf" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.754647 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-ovsdbserver-nb\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.754739 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-ovsdbserver-sb\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.754768 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-dns-swift-storage-0\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.754788 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpwk\" (UniqueName: \"kubernetes.io/projected/b330732a-65fd-4fde-bf81-fce0b551c99e-kube-api-access-zzpwk\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.754849 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-dns-svc\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.754888 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-config\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.770214 4865 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-969599b78-gpdqv" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.779267 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7f5946d9d4-f849n"] Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.780777 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.791901 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.806371 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f5946d9d4-f849n"] Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.863690 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-logs\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.863737 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-ovsdbserver-sb\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.863762 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-dns-swift-storage-0\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.863783 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-config-data\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.863805 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpwk\" (UniqueName: \"kubernetes.io/projected/b330732a-65fd-4fde-bf81-fce0b551c99e-kube-api-access-zzpwk\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.863842 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-config-data-custom\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.863877 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-combined-ca-bundle\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: 
\"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.863901 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-dns-svc\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.863939 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-config\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.863973 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwq2s\" (UniqueName: \"kubernetes.io/projected/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-kube-api-access-qwq2s\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.863992 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-ovsdbserver-nb\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.864875 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-ovsdbserver-nb\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.865368 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-ovsdbserver-sb\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.867045 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-config\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.867611 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-dns-svc\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc kubenswrapper[4865]: I0123 12:12:11.869351 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-dns-swift-storage-0\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:11 crc 
kubenswrapper[4865]: I0123 12:12:11.905396 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzpwk\" (UniqueName: \"kubernetes.io/projected/b330732a-65fd-4fde-bf81-fce0b551c99e-kube-api-access-zzpwk\") pod \"dnsmasq-dns-7bcf7d6c9-7lmbs\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.019794 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwq2s\" (UniqueName: \"kubernetes.io/projected/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-kube-api-access-qwq2s\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.019890 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-logs\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.019958 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-config-data\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.020018 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-config-data-custom\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.020071 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-combined-ca-bundle\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.021107 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-logs\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.032684 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-combined-ca-bundle\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.035729 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-config-data\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.038881 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-config-data-custom\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.080295 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwq2s\" (UniqueName: \"kubernetes.io/projected/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-kube-api-access-qwq2s\") pod \"barbican-api-7f5946d9d4-f849n\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.095131 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.131213 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.234885 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d804805-bf09-488a-80cc-ddda0ba1d466","Type":"ContainerStarted","Data":"874e2e00a4d331d9331e4759c7fcbaac07a02b9a6dff1713f073e276d44a4530"} Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.235099 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerName="ceilometer-notification-agent" containerID="cri-o://a1ba41f3314b057b5cae526ec0277f2a4d1fc8878115e20cde84aefec1e0fa98" gracePeriod=30 Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.235622 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.235960 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerName="proxy-httpd" containerID="cri-o://874e2e00a4d331d9331e4759c7fcbaac07a02b9a6dff1713f073e276d44a4530" gracePeriod=30 Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.236099 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerName="sg-core" containerID="cri-o://f7d59b78c7a56ceaeed9f18a8060713ed1873c87b27747307d57459b8efd3040" gracePeriod=30 Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.270848 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b9c4785f9-kx698" event={"ID":"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd","Type":"ContainerStarted","Data":"3984dc591c32d769473433dc1bc70c90ae9bb9ce986851a1b4e50cff1983dac0"} Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.271106 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b9c4785f9-kx698" event={"ID":"a6bc30aa-5b02-4c6b-ac0e-43799b7929dd","Type":"ContainerStarted","Data":"a158ff17c238f88dc668854327b19dab2db164e7d417edaec3d9138d6b835394"} Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.271671 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-b9c4785f9-kx698" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.400970 4865 generic.go:334] "Generic (PLEG): container finished" podID="583bc182-b987-483a-b691-23853d93edd4" 
containerID="55e91396818145f0223a0d6bf6c9d2fddd6184292ba2d88e360e4c315cc1530e" exitCode=0 Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.401022 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" event={"ID":"583bc182-b987-483a-b691-23853d93edd4","Type":"ContainerDied","Data":"55e91396818145f0223a0d6bf6c9d2fddd6184292ba2d88e360e4c315cc1530e"} Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.404127 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-b9c4785f9-kx698" podStartSLOduration=8.404108028 podStartE2EDuration="8.404108028s" podCreationTimestamp="2026-01-23 12:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:12:12.313858556 +0000 UTC m=+1176.482930772" watchObservedRunningTime="2026-01-23 12:12:12.404108028 +0000 UTC m=+1176.573180254" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.604036 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.787411 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-config\") pod \"583bc182-b987-483a-b691-23853d93edd4\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.787451 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbbgh\" (UniqueName: \"kubernetes.io/projected/583bc182-b987-483a-b691-23853d93edd4-kube-api-access-jbbgh\") pod \"583bc182-b987-483a-b691-23853d93edd4\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.787473 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-dns-swift-storage-0\") pod \"583bc182-b987-483a-b691-23853d93edd4\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.787488 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-ovsdbserver-nb\") pod \"583bc182-b987-483a-b691-23853d93edd4\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.787542 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-dns-svc\") pod \"583bc182-b987-483a-b691-23853d93edd4\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.787624 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-ovsdbserver-sb\") pod \"583bc182-b987-483a-b691-23853d93edd4\" (UID: \"583bc182-b987-483a-b691-23853d93edd4\") " Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.810989 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/583bc182-b987-483a-b691-23853d93edd4-kube-api-access-jbbgh" (OuterVolumeSpecName: "kube-api-access-jbbgh") pod 
"583bc182-b987-483a-b691-23853d93edd4" (UID: "583bc182-b987-483a-b691-23853d93edd4"). InnerVolumeSpecName "kube-api-access-jbbgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.851254 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-969599b78-gpdqv"] Jan 23 12:12:12 crc kubenswrapper[4865]: I0123 12:12:12.946145 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbbgh\" (UniqueName: \"kubernetes.io/projected/583bc182-b987-483a-b691-23853d93edd4-kube-api-access-jbbgh\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.015463 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bcf7d6c9-7lmbs"] Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.122050 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "583bc182-b987-483a-b691-23853d93edd4" (UID: "583bc182-b987-483a-b691-23853d93edd4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.136202 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "583bc182-b987-483a-b691-23853d93edd4" (UID: "583bc182-b987-483a-b691-23853d93edd4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.141132 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "583bc182-b987-483a-b691-23853d93edd4" (UID: "583bc182-b987-483a-b691-23853d93edd4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.153044 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "583bc182-b987-483a-b691-23853d93edd4" (UID: "583bc182-b987-483a-b691-23853d93edd4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.162105 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-config" (OuterVolumeSpecName: "config") pod "583bc182-b987-483a-b691-23853d93edd4" (UID: "583bc182-b987-483a-b691-23853d93edd4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.171044 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.171072 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.171081 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.171092 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.171101 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/583bc182-b987-483a-b691-23853d93edd4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.233304 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-98cdcc84f-cr2jf"] Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.407486 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7f5946d9d4-f849n"] Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.419516 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" event={"ID":"b330732a-65fd-4fde-bf81-fce0b551c99e","Type":"ContainerStarted","Data":"123df6934fc9681ec6611fda6e6f9e24190607cee39cc7f93ba775d8266a79ef"} Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.441884 4865 generic.go:334] "Generic (PLEG): container finished" podID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerID="f7d59b78c7a56ceaeed9f18a8060713ed1873c87b27747307d57459b8efd3040" exitCode=2 Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.441961 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d804805-bf09-488a-80cc-ddda0ba1d466","Type":"ContainerDied","Data":"f7d59b78c7a56ceaeed9f18a8060713ed1873c87b27747307d57459b8efd3040"} Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.450502 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" event={"ID":"583bc182-b987-483a-b691-23853d93edd4","Type":"ContainerDied","Data":"cc0933b0aad649a8950dca49a10c91504b11eaf9b6002cac720fd0ef1d3cccae"} Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.450545 4865 scope.go:117] "RemoveContainer" containerID="55e91396818145f0223a0d6bf6c9d2fddd6184292ba2d88e360e4c315cc1530e" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.450849 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56bbf5954f-z4ncb" Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.490889 4865 generic.go:334] "Generic (PLEG): container finished" podID="3e6117d5-9df1-4299-8358-d7235d7847d2" containerID="d540d33b36c5adf562161dadb0bcd930ee1137ee4310220b513fed962a09963d" exitCode=0 Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.491125 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jw9z7" event={"ID":"3e6117d5-9df1-4299-8358-d7235d7847d2","Type":"ContainerDied","Data":"d540d33b36c5adf562161dadb0bcd930ee1137ee4310220b513fed962a09963d"} Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.502099 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-98cdcc84f-cr2jf" event={"ID":"cb23e044-f1b3-4114-94af-7aa272f670a0","Type":"ContainerStarted","Data":"3bcc4e42f62f5b2e61c2548af980de9984d9ea2e52bac223e9f335a5ddaf0a9e"} Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.512777 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56bbf5954f-z4ncb"] Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.519724 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-969599b78-gpdqv" event={"ID":"2396993e-12f6-41c3-9f09-b501bf6fb29b","Type":"ContainerStarted","Data":"92dcc677979b96b5b937291499e07cbf793b834adc9184d5a36a0d7c0e443754"} Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.536653 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56bbf5954f-z4ncb"] Jan 23 12:12:13 crc kubenswrapper[4865]: I0123 12:12:13.574217 4865 scope.go:117] "RemoveContainer" containerID="938e9fb8404ba18cb4eb29bb6f83fb8e7f6d795444574a55b09c3df91c2c463c" Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.090074 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.128936 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="583bc182-b987-483a-b691-23853d93edd4" path="/var/lib/kubelet/pods/583bc182-b987-483a-b691-23853d93edd4/volumes" Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.355107 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.539293 4865 generic.go:334] "Generic (PLEG): container finished" podID="afab83a5-8e47-4531-80de-ae69dfd11bd9" containerID="f31df3bdd703f46a94516666fc069364522d00b4d795aa2e1b847e7c2a52a592" exitCode=0 Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.539365 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sxqmn" event={"ID":"afab83a5-8e47-4531-80de-ae69dfd11bd9","Type":"ContainerDied","Data":"f31df3bdd703f46a94516666fc069364522d00b4d795aa2e1b847e7c2a52a592"} Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.566928 4865 generic.go:334] "Generic (PLEG): container finished" podID="b330732a-65fd-4fde-bf81-fce0b551c99e" 
containerID="e233be4c03d4b314dacff0df454478861e5459d40d4b1c8941dab72e2be971f4" exitCode=0 Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.567026 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" event={"ID":"b330732a-65fd-4fde-bf81-fce0b551c99e","Type":"ContainerDied","Data":"e233be4c03d4b314dacff0df454478861e5459d40d4b1c8941dab72e2be971f4"} Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.605685 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5946d9d4-f849n" event={"ID":"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6","Type":"ContainerStarted","Data":"7ed173f4be5ac5a3351c319876ccac9b2a22b1038a98c603ffb24312ae6d635d"} Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.605734 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5946d9d4-f849n" event={"ID":"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6","Type":"ContainerStarted","Data":"8ca271485ab203ab4686b756c0caf3b9ab346a773344a13b2f2784b3e6f037f8"} Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.605769 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5946d9d4-f849n" event={"ID":"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6","Type":"ContainerStarted","Data":"c16ce7751d86f51b7751f6dd5dadfe2b9f99b9cf687bdb99a3eae2d9327007a5"} Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.605810 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.605864 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.620771 4865 generic.go:334] "Generic (PLEG): container finished" podID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerID="874e2e00a4d331d9331e4759c7fcbaac07a02b9a6dff1713f073e276d44a4530" exitCode=0 Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.620833 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d804805-bf09-488a-80cc-ddda0ba1d466","Type":"ContainerDied","Data":"874e2e00a4d331d9331e4759c7fcbaac07a02b9a6dff1713f073e276d44a4530"} Jan 23 12:12:14 crc kubenswrapper[4865]: I0123 12:12:14.644213 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7f5946d9d4-f849n" podStartSLOduration=3.644187693 podStartE2EDuration="3.644187693s" podCreationTimestamp="2026-01-23 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:12:14.635429871 +0000 UTC m=+1178.804502107" watchObservedRunningTime="2026-01-23 12:12:14.644187693 +0000 UTC m=+1178.813259919" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.306170 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-jw9z7" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.340781 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e6117d5-9df1-4299-8358-d7235d7847d2-config-data\") pod \"3e6117d5-9df1-4299-8358-d7235d7847d2\" (UID: \"3e6117d5-9df1-4299-8358-d7235d7847d2\") " Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.340929 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltjwl\" (UniqueName: \"kubernetes.io/projected/3e6117d5-9df1-4299-8358-d7235d7847d2-kube-api-access-ltjwl\") pod \"3e6117d5-9df1-4299-8358-d7235d7847d2\" (UID: \"3e6117d5-9df1-4299-8358-d7235d7847d2\") " Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.341000 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6117d5-9df1-4299-8358-d7235d7847d2-combined-ca-bundle\") pod \"3e6117d5-9df1-4299-8358-d7235d7847d2\" (UID: \"3e6117d5-9df1-4299-8358-d7235d7847d2\") " Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.361150 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6117d5-9df1-4299-8358-d7235d7847d2-kube-api-access-ltjwl" (OuterVolumeSpecName: "kube-api-access-ltjwl") pod "3e6117d5-9df1-4299-8358-d7235d7847d2" (UID: "3e6117d5-9df1-4299-8358-d7235d7847d2"). InnerVolumeSpecName "kube-api-access-ltjwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.429693 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6117d5-9df1-4299-8358-d7235d7847d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e6117d5-9df1-4299-8358-d7235d7847d2" (UID: "3e6117d5-9df1-4299-8358-d7235d7847d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.442694 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltjwl\" (UniqueName: \"kubernetes.io/projected/3e6117d5-9df1-4299-8358-d7235d7847d2-kube-api-access-ltjwl\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.442730 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e6117d5-9df1-4299-8358-d7235d7847d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.535865 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e6117d5-9df1-4299-8358-d7235d7847d2-config-data" (OuterVolumeSpecName: "config-data") pod "3e6117d5-9df1-4299-8358-d7235d7847d2" (UID: "3e6117d5-9df1-4299-8358-d7235d7847d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.540276 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75dd7565cd-4skz5"] Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.544707 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e6117d5-9df1-4299-8358-d7235d7847d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:15 crc kubenswrapper[4865]: E0123 12:12:15.545014 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6117d5-9df1-4299-8358-d7235d7847d2" containerName="heat-db-sync" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.545047 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6117d5-9df1-4299-8358-d7235d7847d2" containerName="heat-db-sync" Jan 23 12:12:15 crc kubenswrapper[4865]: E0123 12:12:15.545076 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="583bc182-b987-483a-b691-23853d93edd4" containerName="dnsmasq-dns" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.545084 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="583bc182-b987-483a-b691-23853d93edd4" containerName="dnsmasq-dns" Jan 23 12:12:15 crc kubenswrapper[4865]: E0123 12:12:15.545093 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="583bc182-b987-483a-b691-23853d93edd4" containerName="init" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.545100 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="583bc182-b987-483a-b691-23853d93edd4" containerName="init" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.545270 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6117d5-9df1-4299-8358-d7235d7847d2" containerName="heat-db-sync" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.545281 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="583bc182-b987-483a-b691-23853d93edd4" containerName="dnsmasq-dns" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.546230 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.551281 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.551438 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.560581 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75dd7565cd-4skz5"] Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.646712 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-config-data-custom\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.646751 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-combined-ca-bundle\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.646846 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cksp\" (UniqueName: \"kubernetes.io/projected/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-kube-api-access-7cksp\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.646865 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-logs\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.646887 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-public-tls-certs\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.646910 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-config-data\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.646962 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-internal-tls-certs\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.658360 4865 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/heat-db-sync-jw9z7" event={"ID":"3e6117d5-9df1-4299-8358-d7235d7847d2","Type":"ContainerDied","Data":"8dece95a715de16ee0a64a0c418413bb402765396e9f69795de1590181a1d1d1"} Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.658400 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dece95a715de16ee0a64a0c418413bb402765396e9f69795de1590181a1d1d1" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.658455 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jw9z7" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.662949 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" event={"ID":"b330732a-65fd-4fde-bf81-fce0b551c99e","Type":"ContainerStarted","Data":"2b4eac25c424bff9536bae6bf8f55f55509a215b1c7cceccae21ec5446f0182a"} Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.664094 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.674843 4865 generic.go:334] "Generic (PLEG): container finished" podID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerID="a1ba41f3314b057b5cae526ec0277f2a4d1fc8878115e20cde84aefec1e0fa98" exitCode=0 Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.675703 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d804805-bf09-488a-80cc-ddda0ba1d466","Type":"ContainerDied","Data":"a1ba41f3314b057b5cae526ec0277f2a4d1fc8878115e20cde84aefec1e0fa98"} Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.692486 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" podStartSLOduration=4.692469959 podStartE2EDuration="4.692469959s" podCreationTimestamp="2026-01-23 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:12:15.690989063 +0000 UTC m=+1179.860061289" watchObservedRunningTime="2026-01-23 12:12:15.692469959 +0000 UTC m=+1179.861542185" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.749652 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-combined-ca-bundle\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.749807 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cksp\" (UniqueName: \"kubernetes.io/projected/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-kube-api-access-7cksp\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.749828 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-logs\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.749850 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-public-tls-certs\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.749890 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-config-data\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.749969 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-internal-tls-certs\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.749993 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-config-data-custom\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.754273 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-logs\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.756164 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-config-data\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.756691 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-combined-ca-bundle\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.758137 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-public-tls-certs\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.760056 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-internal-tls-certs\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.760535 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-config-data-custom\") 
pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.778292 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cksp\" (UniqueName: \"kubernetes.io/projected/80922e66-3668-4bf5-8bdf-ce6c9621fcd5-kube-api-access-7cksp\") pod \"barbican-api-75dd7565cd-4skz5\" (UID: \"80922e66-3668-4bf5-8bdf-ce6c9621fcd5\") " pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:15 crc kubenswrapper[4865]: I0123 12:12:15.876914 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:16 crc kubenswrapper[4865]: I0123 12:12:16.684421 4865 generic.go:334] "Generic (PLEG): container finished" podID="f0a0ac97-b923-44ce-8b90-31965497a560" containerID="32cfc548e09beda78485f718018e8e94ae7cb9906ccb77f8f7d9c77928c1d3ed" exitCode=137 Jan 23 12:12:16 crc kubenswrapper[4865]: I0123 12:12:16.684793 4865 generic.go:334] "Generic (PLEG): container finished" podID="f0a0ac97-b923-44ce-8b90-31965497a560" containerID="e5aadf1e3aa11acb349489c22739155115814b4e9ef4e90e3701e7d6018b2a69" exitCode=137 Jan 23 12:12:16 crc kubenswrapper[4865]: I0123 12:12:16.684503 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7999dc6947-8xp26" event={"ID":"f0a0ac97-b923-44ce-8b90-31965497a560","Type":"ContainerDied","Data":"32cfc548e09beda78485f718018e8e94ae7cb9906ccb77f8f7d9c77928c1d3ed"} Jan 23 12:12:16 crc kubenswrapper[4865]: I0123 12:12:16.685080 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7999dc6947-8xp26" event={"ID":"f0a0ac97-b923-44ce-8b90-31965497a560","Type":"ContainerDied","Data":"e5aadf1e3aa11acb349489c22739155115814b4e9ef4e90e3701e7d6018b2a69"} Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.058665 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.133037 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnrpg\" (UniqueName: \"kubernetes.io/projected/afab83a5-8e47-4531-80de-ae69dfd11bd9-kube-api-access-lnrpg\") pod \"afab83a5-8e47-4531-80de-ae69dfd11bd9\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.135345 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-db-sync-config-data\") pod \"afab83a5-8e47-4531-80de-ae69dfd11bd9\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.135713 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-combined-ca-bundle\") pod \"afab83a5-8e47-4531-80de-ae69dfd11bd9\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.136031 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-config-data\") pod \"afab83a5-8e47-4531-80de-ae69dfd11bd9\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.142730 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-scripts\") pod \"afab83a5-8e47-4531-80de-ae69dfd11bd9\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.146816 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/afab83a5-8e47-4531-80de-ae69dfd11bd9-etc-machine-id\") pod \"afab83a5-8e47-4531-80de-ae69dfd11bd9\" (UID: \"afab83a5-8e47-4531-80de-ae69dfd11bd9\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.159291 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afab83a5-8e47-4531-80de-ae69dfd11bd9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "afab83a5-8e47-4531-80de-ae69dfd11bd9" (UID: "afab83a5-8e47-4531-80de-ae69dfd11bd9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.165575 4865 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/afab83a5-8e47-4531-80de-ae69dfd11bd9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.199196 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afab83a5-8e47-4531-80de-ae69dfd11bd9-kube-api-access-lnrpg" (OuterVolumeSpecName: "kube-api-access-lnrpg") pod "afab83a5-8e47-4531-80de-ae69dfd11bd9" (UID: "afab83a5-8e47-4531-80de-ae69dfd11bd9"). InnerVolumeSpecName "kube-api-access-lnrpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.201368 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "afab83a5-8e47-4531-80de-ae69dfd11bd9" (UID: "afab83a5-8e47-4531-80de-ae69dfd11bd9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.234588 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afab83a5-8e47-4531-80de-ae69dfd11bd9" (UID: "afab83a5-8e47-4531-80de-ae69dfd11bd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.249977 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-scripts" (OuterVolumeSpecName: "scripts") pod "afab83a5-8e47-4531-80de-ae69dfd11bd9" (UID: "afab83a5-8e47-4531-80de-ae69dfd11bd9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.275067 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnrpg\" (UniqueName: \"kubernetes.io/projected/afab83a5-8e47-4531-80de-ae69dfd11bd9-kube-api-access-lnrpg\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.275292 4865 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.275352 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.275407 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.348624 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.432844 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-config-data" (OuterVolumeSpecName: "config-data") pod "afab83a5-8e47-4531-80de-ae69dfd11bd9" (UID: "afab83a5-8e47-4531-80de-ae69dfd11bd9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.478407 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-combined-ca-bundle\") pod \"8d804805-bf09-488a-80cc-ddda0ba1d466\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.478496 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d804805-bf09-488a-80cc-ddda0ba1d466-log-httpd\") pod \"8d804805-bf09-488a-80cc-ddda0ba1d466\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.478548 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d804805-bf09-488a-80cc-ddda0ba1d466-run-httpd\") pod \"8d804805-bf09-488a-80cc-ddda0ba1d466\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.478636 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nxpt\" (UniqueName: \"kubernetes.io/projected/8d804805-bf09-488a-80cc-ddda0ba1d466-kube-api-access-9nxpt\") pod \"8d804805-bf09-488a-80cc-ddda0ba1d466\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.478724 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-scripts\") pod \"8d804805-bf09-488a-80cc-ddda0ba1d466\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.478755 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-config-data\") pod \"8d804805-bf09-488a-80cc-ddda0ba1d466\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.478822 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-sg-core-conf-yaml\") pod \"8d804805-bf09-488a-80cc-ddda0ba1d466\" (UID: \"8d804805-bf09-488a-80cc-ddda0ba1d466\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.479283 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afab83a5-8e47-4531-80de-ae69dfd11bd9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.480907 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d804805-bf09-488a-80cc-ddda0ba1d466-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8d804805-bf09-488a-80cc-ddda0ba1d466" (UID: "8d804805-bf09-488a-80cc-ddda0ba1d466"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.481382 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d804805-bf09-488a-80cc-ddda0ba1d466-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8d804805-bf09-488a-80cc-ddda0ba1d466" (UID: "8d804805-bf09-488a-80cc-ddda0ba1d466"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.507853 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d804805-bf09-488a-80cc-ddda0ba1d466-kube-api-access-9nxpt" (OuterVolumeSpecName: "kube-api-access-9nxpt") pod "8d804805-bf09-488a-80cc-ddda0ba1d466" (UID: "8d804805-bf09-488a-80cc-ddda0ba1d466"). InnerVolumeSpecName "kube-api-access-9nxpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.509347 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-scripts" (OuterVolumeSpecName: "scripts") pod "8d804805-bf09-488a-80cc-ddda0ba1d466" (UID: "8d804805-bf09-488a-80cc-ddda0ba1d466"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.568528 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8d804805-bf09-488a-80cc-ddda0ba1d466" (UID: "8d804805-bf09-488a-80cc-ddda0ba1d466"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.580994 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.581026 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d804805-bf09-488a-80cc-ddda0ba1d466-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.581036 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d804805-bf09-488a-80cc-ddda0ba1d466-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.581053 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nxpt\" (UniqueName: \"kubernetes.io/projected/8d804805-bf09-488a-80cc-ddda0ba1d466-kube-api-access-9nxpt\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.581062 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.642754 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.657774 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d804805-bf09-488a-80cc-ddda0ba1d466" (UID: "8d804805-bf09-488a-80cc-ddda0ba1d466"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.684458 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.686409 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-config-data" (OuterVolumeSpecName: "config-data") pod "8d804805-bf09-488a-80cc-ddda0ba1d466" (UID: "8d804805-bf09-488a-80cc-ddda0ba1d466"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.723435 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d804805-bf09-488a-80cc-ddda0ba1d466","Type":"ContainerDied","Data":"b56e15a5525bc76c8254f2169753184c29f20523ea7f88ee05b08dfa585dde37"} Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.723489 4865 scope.go:117] "RemoveContainer" containerID="874e2e00a4d331d9331e4759c7fcbaac07a02b9a6dff1713f073e276d44a4530" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.724154 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.745169 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7999dc6947-8xp26" event={"ID":"f0a0ac97-b923-44ce-8b90-31965497a560","Type":"ContainerDied","Data":"1975e027324190fde92500f0ff7bf058df8fdb14f8a86b92571140f14f1d8e9e"} Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.745280 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7999dc6947-8xp26" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.761896 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-sxqmn" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.761887 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sxqmn" event={"ID":"afab83a5-8e47-4531-80de-ae69dfd11bd9","Type":"ContainerDied","Data":"dbd9cde13a27ec909f15afc4f801d1138105392ad52f2d7c09e6335a038dc6b0"} Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.761979 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbd9cde13a27ec909f15afc4f801d1138105392ad52f2d7c09e6335a038dc6b0" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.770009 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-98cdcc84f-cr2jf" event={"ID":"cb23e044-f1b3-4114-94af-7aa272f670a0","Type":"ContainerStarted","Data":"671e98b7d2b8cf930abe0e27ab4f841c10867095fc85340d219498463c708051"} Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.784438 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-969599b78-gpdqv" event={"ID":"2396993e-12f6-41c3-9f09-b501bf6fb29b","Type":"ContainerStarted","Data":"48e4bbad5b912b99e3eebec1e0061f5ca9f72de6ad9b299deac580f2fdccbaf4"} Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.786733 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0a0ac97-b923-44ce-8b90-31965497a560-scripts\") pod \"f0a0ac97-b923-44ce-8b90-31965497a560\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.786831 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a0ac97-b923-44ce-8b90-31965497a560-logs\") pod \"f0a0ac97-b923-44ce-8b90-31965497a560\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.786883 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n226c\" (UniqueName: \"kubernetes.io/projected/f0a0ac97-b923-44ce-8b90-31965497a560-kube-api-access-n226c\") pod \"f0a0ac97-b923-44ce-8b90-31965497a560\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.787564 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0a0ac97-b923-44ce-8b90-31965497a560-horizon-secret-key\") pod \"f0a0ac97-b923-44ce-8b90-31965497a560\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.787668 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0a0ac97-b923-44ce-8b90-31965497a560-config-data\") pod \"f0a0ac97-b923-44ce-8b90-31965497a560\" (UID: \"f0a0ac97-b923-44ce-8b90-31965497a560\") " Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.787773 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0a0ac97-b923-44ce-8b90-31965497a560-logs" (OuterVolumeSpecName: "logs") pod "f0a0ac97-b923-44ce-8b90-31965497a560" (UID: "f0a0ac97-b923-44ce-8b90-31965497a560"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.788055 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d804805-bf09-488a-80cc-ddda0ba1d466-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.788069 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a0ac97-b923-44ce-8b90-31965497a560-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.807891 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0a0ac97-b923-44ce-8b90-31965497a560-kube-api-access-n226c" (OuterVolumeSpecName: "kube-api-access-n226c") pod "f0a0ac97-b923-44ce-8b90-31965497a560" (UID: "f0a0ac97-b923-44ce-8b90-31965497a560"). InnerVolumeSpecName "kube-api-access-n226c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.811445 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75dd7565cd-4skz5"] Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.819503 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0a0ac97-b923-44ce-8b90-31965497a560-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f0a0ac97-b923-44ce-8b90-31965497a560" (UID: "f0a0ac97-b923-44ce-8b90-31965497a560"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.820311 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-969599b78-gpdqv" podStartSLOduration=2.803151559 podStartE2EDuration="7.820292226s" podCreationTimestamp="2026-01-23 12:12:11 +0000 UTC" firstStartedPulling="2026-01-23 12:12:12.970021197 +0000 UTC m=+1177.139093423" lastFinishedPulling="2026-01-23 12:12:17.987161864 +0000 UTC m=+1182.156234090" observedRunningTime="2026-01-23 12:12:18.811807011 +0000 UTC m=+1182.980879227" watchObservedRunningTime="2026-01-23 12:12:18.820292226 +0000 UTC m=+1182.989364452" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.834820 4865 scope.go:117] "RemoveContainer" containerID="f7d59b78c7a56ceaeed9f18a8060713ed1873c87b27747307d57459b8efd3040" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.845681 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0a0ac97-b923-44ce-8b90-31965497a560-config-data" (OuterVolumeSpecName: "config-data") pod "f0a0ac97-b923-44ce-8b90-31965497a560" (UID: "f0a0ac97-b923-44ce-8b90-31965497a560"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.893825 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n226c\" (UniqueName: \"kubernetes.io/projected/f0a0ac97-b923-44ce-8b90-31965497a560-kube-api-access-n226c\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.893863 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f0a0ac97-b923-44ce-8b90-31965497a560-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.893876 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f0a0ac97-b923-44ce-8b90-31965497a560-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.943256 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0a0ac97-b923-44ce-8b90-31965497a560-scripts" (OuterVolumeSpecName: "scripts") pod "f0a0ac97-b923-44ce-8b90-31965497a560" (UID: "f0a0ac97-b923-44ce-8b90-31965497a560"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:18 crc kubenswrapper[4865]: I0123 12:12:18.995336 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f0a0ac97-b923-44ce-8b90-31965497a560-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.093121 4865 scope.go:117] "RemoveContainer" containerID="a1ba41f3314b057b5cae526ec0277f2a4d1fc8878115e20cde84aefec1e0fa98" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.119365 4865 scope.go:117] "RemoveContainer" containerID="32cfc548e09beda78485f718018e8e94ae7cb9906ccb77f8f7d9c77928c1d3ed" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.122660 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.135922 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.164696 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7999dc6947-8xp26"] Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.170713 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7999dc6947-8xp26"] Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.191186 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:12:19 crc kubenswrapper[4865]: E0123 12:12:19.191625 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afab83a5-8e47-4531-80de-ae69dfd11bd9" containerName="cinder-db-sync" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.191643 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="afab83a5-8e47-4531-80de-ae69dfd11bd9" containerName="cinder-db-sync" Jan 23 12:12:19 crc kubenswrapper[4865]: E0123 12:12:19.191671 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0a0ac97-b923-44ce-8b90-31965497a560" containerName="horizon" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.191679 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0a0ac97-b923-44ce-8b90-31965497a560" containerName="horizon" Jan 23 12:12:19 crc kubenswrapper[4865]: E0123 12:12:19.191697 4865 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f0a0ac97-b923-44ce-8b90-31965497a560" containerName="horizon-log" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.191705 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0a0ac97-b923-44ce-8b90-31965497a560" containerName="horizon-log" Jan 23 12:12:19 crc kubenswrapper[4865]: E0123 12:12:19.191718 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerName="proxy-httpd" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.191725 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerName="proxy-httpd" Jan 23 12:12:19 crc kubenswrapper[4865]: E0123 12:12:19.191737 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerName="ceilometer-notification-agent" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.191745 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerName="ceilometer-notification-agent" Jan 23 12:12:19 crc kubenswrapper[4865]: E0123 12:12:19.191766 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerName="sg-core" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.191774 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerName="sg-core" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.191992 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="afab83a5-8e47-4531-80de-ae69dfd11bd9" containerName="cinder-db-sync" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.192008 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerName="ceilometer-notification-agent" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.192021 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerName="proxy-httpd" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.192030 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0a0ac97-b923-44ce-8b90-31965497a560" containerName="horizon-log" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.192040 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0a0ac97-b923-44ce-8b90-31965497a560" containerName="horizon" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.192051 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" containerName="sg-core" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.193801 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.202015 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.202816 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.234336 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.303854 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.303903 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-scripts\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.303925 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hcwp\" (UniqueName: \"kubernetes.io/projected/89d6a1d7-0026-4530-a03b-65bcc436655e-kube-api-access-4hcwp\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.303985 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-config-data\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.304018 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89d6a1d7-0026-4530-a03b-65bcc436655e-log-httpd\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.304057 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.304076 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89d6a1d7-0026-4530-a03b-65bcc436655e-run-httpd\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.405949 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 
12:12:19.406003 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-scripts\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.406054 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hcwp\" (UniqueName: \"kubernetes.io/projected/89d6a1d7-0026-4530-a03b-65bcc436655e-kube-api-access-4hcwp\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.406135 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-config-data\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.406197 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89d6a1d7-0026-4530-a03b-65bcc436655e-log-httpd\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.406276 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.406308 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89d6a1d7-0026-4530-a03b-65bcc436655e-run-httpd\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.407335 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89d6a1d7-0026-4530-a03b-65bcc436655e-log-httpd\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.407496 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89d6a1d7-0026-4530-a03b-65bcc436655e-run-httpd\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.412351 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-scripts\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.423643 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-config-data\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.431344 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.433631 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.436829 4865 scope.go:117] "RemoveContainer" containerID="e5aadf1e3aa11acb349489c22739155115814b4e9ef4e90e3701e7d6018b2a69" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.476750 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hcwp\" (UniqueName: \"kubernetes.io/projected/89d6a1d7-0026-4530-a03b-65bcc436655e-kube-api-access-4hcwp\") pod \"ceilometer-0\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.549709 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.656383 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.657836 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.675189 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-f7zrz" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.675397 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.675514 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.675660 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.710858 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-scripts\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.710913 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m27m4\" (UniqueName: \"kubernetes.io/projected/568083b0-5547-4699-905d-26e7ba8e510c-kube-api-access-m27m4\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.710950 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.710971 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/568083b0-5547-4699-905d-26e7ba8e510c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.711036 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.711071 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-config-data\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.724052 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.821483 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.821797 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-config-data\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.821853 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-scripts\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.821872 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m27m4\" (UniqueName: \"kubernetes.io/projected/568083b0-5547-4699-905d-26e7ba8e510c-kube-api-access-m27m4\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.821905 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.821923 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/568083b0-5547-4699-905d-26e7ba8e510c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.822007 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/568083b0-5547-4699-905d-26e7ba8e510c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.877661 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-scripts\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.877713 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.878714 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.879641 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-config-data\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.880235 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-98cdcc84f-cr2jf" event={"ID":"cb23e044-f1b3-4114-94af-7aa272f670a0","Type":"ContainerStarted","Data":"a0a287ae2d0c1573a488aeb42d2637ecf01607a93bca7ae10c5d1886b02c6255"} Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.912803 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-969599b78-gpdqv" event={"ID":"2396993e-12f6-41c3-9f09-b501bf6fb29b","Type":"ContainerStarted","Data":"a358001fe52f4f995134580775be3fced43f185f04ea9efbad0bcb4323332470"} Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.914929 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75dd7565cd-4skz5" event={"ID":"80922e66-3668-4bf5-8bdf-ce6c9621fcd5","Type":"ContainerStarted","Data":"9291b754a5969aeea93daeaafe0059e17216d7923c6a604980898a1341cca58f"} Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.914959 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75dd7565cd-4skz5" event={"ID":"80922e66-3668-4bf5-8bdf-ce6c9621fcd5","Type":"ContainerStarted","Data":"54b2479bf207f08722a875ae061957cadef01d48c3eb98a697b899ba3f718dc5"} Jan 23 12:12:19 crc kubenswrapper[4865]: I0123 12:12:19.960962 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m27m4\" (UniqueName: \"kubernetes.io/projected/568083b0-5547-4699-905d-26e7ba8e510c-kube-api-access-m27m4\") pod \"cinder-scheduler-0\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.006397 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bcf7d6c9-7lmbs"] Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.006627 
4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" podUID="b330732a-65fd-4fde-bf81-fce0b551c99e" containerName="dnsmasq-dns" containerID="cri-o://2b4eac25c424bff9536bae6bf8f55f55509a215b1c7cceccae21ec5446f0182a" gracePeriod=10 Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.011891 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.022214 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.031185 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-98cdcc84f-cr2jf" podStartSLOduration=4.303206323 podStartE2EDuration="9.031170535s" podCreationTimestamp="2026-01-23 12:12:11 +0000 UTC" firstStartedPulling="2026-01-23 12:12:13.25787111 +0000 UTC m=+1177.426943336" lastFinishedPulling="2026-01-23 12:12:17.985835322 +0000 UTC m=+1182.154907548" observedRunningTime="2026-01-23 12:12:20.026965984 +0000 UTC m=+1184.196038210" watchObservedRunningTime="2026-01-23 12:12:20.031170535 +0000 UTC m=+1184.200242761" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.147961 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d804805-bf09-488a-80cc-ddda0ba1d466" path="/var/lib/kubelet/pods/8d804805-bf09-488a-80cc-ddda0ba1d466/volumes" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.148886 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0a0ac97-b923-44ce-8b90-31965497a560" path="/var/lib/kubelet/pods/f0a0ac97-b923-44ce-8b90-31965497a560/volumes" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.288988 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77f6db879-6nzmm"] Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.315496 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.399589 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77f6db879-6nzmm"] Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.453571 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-ovsdbserver-nb\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.453653 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-config\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.453689 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ckhq\" (UniqueName: \"kubernetes.io/projected/113b112b-608a-4096-bf63-f06706ccc128-kube-api-access-6ckhq\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.453714 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-dns-svc\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.453757 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-dns-swift-storage-0\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.453782 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-ovsdbserver-sb\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.537471 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.539104 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.553207 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.555009 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-dns-svc\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.555089 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-dns-swift-storage-0\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.555112 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-ovsdbserver-sb\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.555195 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-ovsdbserver-nb\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.555233 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-config\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.555262 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ckhq\" (UniqueName: \"kubernetes.io/projected/113b112b-608a-4096-bf63-f06706ccc128-kube-api-access-6ckhq\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.556514 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-dns-svc\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.556530 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-ovsdbserver-sb\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.556919 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-ovsdbserver-nb\") pod 
\"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.557245 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-config\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.557434 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-dns-swift-storage-0\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.596765 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.634425 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ckhq\" (UniqueName: \"kubernetes.io/projected/113b112b-608a-4096-bf63-f06706ccc128-kube-api-access-6ckhq\") pod \"dnsmasq-dns-77f6db879-6nzmm\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.664339 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.664421 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-config-data-custom\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.664441 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-scripts\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.664453 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-config-data\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.664473 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gl9t\" (UniqueName: \"kubernetes.io/projected/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-kube-api-access-9gl9t\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.664493 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-logs\") pod 
\"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.664510 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.713057 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.737638 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.768509 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-scripts\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.768550 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-config-data\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.768577 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gl9t\" (UniqueName: \"kubernetes.io/projected/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-kube-api-access-9gl9t\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.768608 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-logs\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.768627 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.768721 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.768787 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-config-data-custom\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.770544 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-etc-machine-id\") pod \"cinder-api-0\" 
(UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.770898 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-logs\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.776849 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.777921 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-config-data\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.779894 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.779913 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-config-data-custom\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.783958 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-scripts\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.837194 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gl9t\" (UniqueName: \"kubernetes.io/projected/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-kube-api-access-9gl9t\") pod \"cinder-api-0\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.900016 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.945195 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 12:12:20 crc kubenswrapper[4865]: I0123 12:12:20.985966 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89d6a1d7-0026-4530-a03b-65bcc436655e","Type":"ContainerStarted","Data":"f1926692bb390bc81a412518de0a001af0a41d838c8ef3593c0b4e261ab3b662"} Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.043730 4865 generic.go:334] "Generic (PLEG): container finished" podID="b330732a-65fd-4fde-bf81-fce0b551c99e" containerID="2b4eac25c424bff9536bae6bf8f55f55509a215b1c7cceccae21ec5446f0182a" exitCode=0 Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.044244 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" event={"ID":"b330732a-65fd-4fde-bf81-fce0b551c99e","Type":"ContainerDied","Data":"2b4eac25c424bff9536bae6bf8f55f55509a215b1c7cceccae21ec5446f0182a"} Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.692862 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.808652 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-config\") pod \"b330732a-65fd-4fde-bf81-fce0b551c99e\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.808720 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-ovsdbserver-sb\") pod \"b330732a-65fd-4fde-bf81-fce0b551c99e\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.808797 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzpwk\" (UniqueName: \"kubernetes.io/projected/b330732a-65fd-4fde-bf81-fce0b551c99e-kube-api-access-zzpwk\") pod \"b330732a-65fd-4fde-bf81-fce0b551c99e\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.808866 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-dns-svc\") pod \"b330732a-65fd-4fde-bf81-fce0b551c99e\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.808890 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-dns-swift-storage-0\") pod \"b330732a-65fd-4fde-bf81-fce0b551c99e\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.808933 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-ovsdbserver-nb\") pod \"b330732a-65fd-4fde-bf81-fce0b551c99e\" (UID: \"b330732a-65fd-4fde-bf81-fce0b551c99e\") " Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.845348 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b330732a-65fd-4fde-bf81-fce0b551c99e-kube-api-access-zzpwk" (OuterVolumeSpecName: "kube-api-access-zzpwk") pod "b330732a-65fd-4fde-bf81-fce0b551c99e" (UID: "b330732a-65fd-4fde-bf81-fce0b551c99e"). InnerVolumeSpecName "kube-api-access-zzpwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.912223 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzpwk\" (UniqueName: \"kubernetes.io/projected/b330732a-65fd-4fde-bf81-fce0b551c99e-kube-api-access-zzpwk\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.924529 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77f6db879-6nzmm"] Jan 23 12:12:21 crc kubenswrapper[4865]: I0123 12:12:21.974074 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.076700 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75dd7565cd-4skz5" event={"ID":"80922e66-3668-4bf5-8bdf-ce6c9621fcd5","Type":"ContainerStarted","Data":"08cc5121c234287be7546594cbc5a8d5048bb2dcc5f5067f8dae5bc2d0e7b1f4"} Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.078087 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.078118 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.085053 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"568083b0-5547-4699-905d-26e7ba8e510c","Type":"ContainerStarted","Data":"a6209d8d0b38d484f4c3cc7e101ab879368a3bd2146d5ef57a268ec9e37deff8"} Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.090883 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" event={"ID":"b330732a-65fd-4fde-bf81-fce0b551c99e","Type":"ContainerDied","Data":"123df6934fc9681ec6611fda6e6f9e24190607cee39cc7f93ba775d8266a79ef"} Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.090931 4865 scope.go:117] "RemoveContainer" containerID="2b4eac25c424bff9536bae6bf8f55f55509a215b1c7cceccae21ec5446f0182a" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.091057 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bcf7d6c9-7lmbs" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.110849 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" event={"ID":"113b112b-608a-4096-bf63-f06706ccc128","Type":"ContainerStarted","Data":"fa79d8a67489642bab9ad667fba27d18d3e161427ecc3a5059c315054c64f0bf"} Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.122156 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c4d0d746-ca83-4a4f-b0b4-355f268f47fd","Type":"ContainerStarted","Data":"3e0d7c87fbc0a3170a157044db26ee4d507e1106ef48e69f0ff8ae738d61794e"} Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.135289 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75dd7565cd-4skz5" podStartSLOduration=7.13527385 podStartE2EDuration="7.13527385s" podCreationTimestamp="2026-01-23 12:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:12:22.111106595 +0000 UTC m=+1186.280178831" watchObservedRunningTime="2026-01-23 12:12:22.13527385 +0000 UTC m=+1186.304346076" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.224814 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b330732a-65fd-4fde-bf81-fce0b551c99e" (UID: "b330732a-65fd-4fde-bf81-fce0b551c99e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.234593 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.251237 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b330732a-65fd-4fde-bf81-fce0b551c99e" (UID: "b330732a-65fd-4fde-bf81-fce0b551c99e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.272224 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b330732a-65fd-4fde-bf81-fce0b551c99e" (UID: "b330732a-65fd-4fde-bf81-fce0b551c99e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.283266 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-config" (OuterVolumeSpecName: "config") pod "b330732a-65fd-4fde-bf81-fce0b551c99e" (UID: "b330732a-65fd-4fde-bf81-fce0b551c99e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.288188 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b330732a-65fd-4fde-bf81-fce0b551c99e" (UID: "b330732a-65fd-4fde-bf81-fce0b551c99e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.289733 4865 scope.go:117] "RemoveContainer" containerID="e233be4c03d4b314dacff0df454478861e5459d40d4b1c8941dab72e2be971f4" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.336673 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.336702 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.336714 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.337009 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b330732a-65fd-4fde-bf81-fce0b551c99e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.468264 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bcf7d6c9-7lmbs"] Jan 23 12:12:22 crc kubenswrapper[4865]: I0123 12:12:22.489166 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bcf7d6c9-7lmbs"] Jan 23 12:12:23 crc kubenswrapper[4865]: I0123 12:12:23.058849 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 12:12:23 crc kubenswrapper[4865]: I0123 12:12:23.178388 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f5946d9d4-f849n" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:23 crc kubenswrapper[4865]: I0123 12:12:23.181011 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89d6a1d7-0026-4530-a03b-65bcc436655e","Type":"ContainerStarted","Data":"57c67591266883497ef0bdbc1e43546d32e015ace7ba8c07961655549d534fd1"} Jan 23 12:12:24 crc kubenswrapper[4865]: I0123 12:12:24.149138 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b330732a-65fd-4fde-bf81-fce0b551c99e" path="/var/lib/kubelet/pods/b330732a-65fd-4fde-bf81-fce0b551c99e/volumes" Jan 23 12:12:24 crc kubenswrapper[4865]: I0123 12:12:24.241001 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" event={"ID":"113b112b-608a-4096-bf63-f06706ccc128","Type":"ContainerStarted","Data":"3bbcffc851a4fa105b581f1c61fdc6b0630b442a9acdb96a24259023121d8d5c"} Jan 23 12:12:24 crc kubenswrapper[4865]: I0123 12:12:24.252644 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c4d0d746-ca83-4a4f-b0b4-355f268f47fd","Type":"ContainerStarted","Data":"d2830aa1a5fae675c81384ff4ceb36e7d7766546609eeb6684c1b1137fc42610"} Jan 23 12:12:24 crc kubenswrapper[4865]: I0123 12:12:24.255628 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"568083b0-5547-4699-905d-26e7ba8e510c","Type":"ContainerStarted","Data":"a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6"} Jan 23 12:12:25 crc kubenswrapper[4865]: I0123 12:12:25.288689 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89d6a1d7-0026-4530-a03b-65bcc436655e","Type":"ContainerStarted","Data":"3ef2c875e2abe1a8810c86e36653524ea0fae9936696e5c4e3f0ea65fa7bbafc"} Jan 23 12:12:25 crc kubenswrapper[4865]: I0123 12:12:25.291799 4865 generic.go:334] "Generic (PLEG): container finished" podID="113b112b-608a-4096-bf63-f06706ccc128" containerID="3bbcffc851a4fa105b581f1c61fdc6b0630b442a9acdb96a24259023121d8d5c" exitCode=0 Jan 23 12:12:25 crc kubenswrapper[4865]: I0123 12:12:25.291897 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" event={"ID":"113b112b-608a-4096-bf63-f06706ccc128","Type":"ContainerDied","Data":"3bbcffc851a4fa105b581f1c61fdc6b0630b442a9acdb96a24259023121d8d5c"} Jan 23 12:12:26 crc kubenswrapper[4865]: I0123 12:12:26.181542 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f5946d9d4-f849n" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:26 crc kubenswrapper[4865]: I0123 12:12:26.304194 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"568083b0-5547-4699-905d-26e7ba8e510c","Type":"ContainerStarted","Data":"ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9"} Jan 23 12:12:26 crc kubenswrapper[4865]: I0123 12:12:26.307170 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89d6a1d7-0026-4530-a03b-65bcc436655e","Type":"ContainerStarted","Data":"a869947817e7940600226a3079e8946e920766069a180c4538aec29c05190d1c"} Jan 23 12:12:26 crc kubenswrapper[4865]: I0123 12:12:26.309274 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" event={"ID":"113b112b-608a-4096-bf63-f06706ccc128","Type":"ContainerStarted","Data":"3af9ec6073a9c6c3e08625a3a522dc6c7fbe13096d50f4d079ee344b9fd68a4e"} Jan 23 12:12:26 crc kubenswrapper[4865]: I0123 12:12:26.310184 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:26 crc kubenswrapper[4865]: I0123 12:12:26.312635 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c4d0d746-ca83-4a4f-b0b4-355f268f47fd","Type":"ContainerStarted","Data":"e8c560f6a96e10829538cb3ad68eaf34a62dfc2c6e388bbba51ded3ea1a7fbb3"} Jan 23 12:12:26 crc kubenswrapper[4865]: I0123 12:12:26.312878 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerName="cinder-api-log" containerID="cri-o://d2830aa1a5fae675c81384ff4ceb36e7d7766546609eeb6684c1b1137fc42610" gracePeriod=30 Jan 23 12:12:26 crc 
kubenswrapper[4865]: I0123 12:12:26.313149 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 12:12:26 crc kubenswrapper[4865]: I0123 12:12:26.313253 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerName="cinder-api" containerID="cri-o://e8c560f6a96e10829538cb3ad68eaf34a62dfc2c6e388bbba51ded3ea1a7fbb3" gracePeriod=30 Jan 23 12:12:26 crc kubenswrapper[4865]: I0123 12:12:26.330916 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.642224098 podStartE2EDuration="7.330894475s" podCreationTimestamp="2026-01-23 12:12:19 +0000 UTC" firstStartedPulling="2026-01-23 12:12:20.995809829 +0000 UTC m=+1185.164882055" lastFinishedPulling="2026-01-23 12:12:21.684480206 +0000 UTC m=+1185.853552432" observedRunningTime="2026-01-23 12:12:26.328437626 +0000 UTC m=+1190.497509852" watchObservedRunningTime="2026-01-23 12:12:26.330894475 +0000 UTC m=+1190.499966701" Jan 23 12:12:26 crc kubenswrapper[4865]: I0123 12:12:26.367695 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" podStartSLOduration=6.367671015 podStartE2EDuration="6.367671015s" podCreationTimestamp="2026-01-23 12:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:12:26.3459629 +0000 UTC m=+1190.515035126" watchObservedRunningTime="2026-01-23 12:12:26.367671015 +0000 UTC m=+1190.536743241" Jan 23 12:12:26 crc kubenswrapper[4865]: I0123 12:12:26.399656 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.399630688 podStartE2EDuration="6.399630688s" podCreationTimestamp="2026-01-23 12:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:12:26.36665949 +0000 UTC m=+1190.535731726" watchObservedRunningTime="2026-01-23 12:12:26.399630688 +0000 UTC m=+1190.568702914" Jan 23 12:12:27 crc kubenswrapper[4865]: I0123 12:12:27.215861 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f5946d9d4-f849n" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:27 crc kubenswrapper[4865]: I0123 12:12:27.216361 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f5946d9d4-f849n" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:27 crc kubenswrapper[4865]: I0123 12:12:27.345952 4865 generic.go:334] "Generic (PLEG): container finished" podID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerID="d2830aa1a5fae675c81384ff4ceb36e7d7766546609eeb6684c1b1137fc42610" exitCode=143 Jan 23 12:12:27 crc kubenswrapper[4865]: I0123 12:12:27.346980 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"c4d0d746-ca83-4a4f-b0b4-355f268f47fd","Type":"ContainerDied","Data":"d2830aa1a5fae675c81384ff4ceb36e7d7766546609eeb6684c1b1137fc42610"} Jan 23 12:12:28 crc kubenswrapper[4865]: I0123 12:12:28.219746 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7f5946d9d4-f849n" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:28 crc kubenswrapper[4865]: I0123 12:12:28.371478 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89d6a1d7-0026-4530-a03b-65bcc436655e","Type":"ContainerStarted","Data":"e5d727a5155cb3a4ec53a69a21563e71943e7d30470e871f3736d0b6f9da3d45"} Jan 23 12:12:28 crc kubenswrapper[4865]: I0123 12:12:28.398006 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.77895897 podStartE2EDuration="9.397986675s" podCreationTimestamp="2026-01-23 12:12:19 +0000 UTC" firstStartedPulling="2026-01-23 12:12:20.768781636 +0000 UTC m=+1184.937853862" lastFinishedPulling="2026-01-23 12:12:27.387809341 +0000 UTC m=+1191.556881567" observedRunningTime="2026-01-23 12:12:28.397750719 +0000 UTC m=+1192.566822945" watchObservedRunningTime="2026-01-23 12:12:28.397986675 +0000 UTC m=+1192.567058901" Jan 23 12:12:29 crc kubenswrapper[4865]: I0123 12:12:29.094857 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:29 crc kubenswrapper[4865]: I0123 12:12:29.095288 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:12:29 crc kubenswrapper[4865]: I0123 12:12:29.096104 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"ad0bd0b06faa3989d6d91f836137fa93ac3878b4dcf0b308bb72332eb709161b"} pod="openstack/horizon-7d44bd7746-lpzlt" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 12:12:29 crc kubenswrapper[4865]: I0123 12:12:29.096218 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" containerID="cri-o://ad0bd0b06faa3989d6d91f836137fa93ac3878b4dcf0b308bb72332eb709161b" gracePeriod=30 Jan 23 12:12:29 crc kubenswrapper[4865]: I0123 12:12:29.298094 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:29 crc kubenswrapper[4865]: I0123 12:12:29.362930 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" Jan 23 12:12:29 crc kubenswrapper[4865]: I0123 12:12:29.363313 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:12:29 crc kubenswrapper[4865]: I0123 12:12:29.364673 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"374cfbc4973d1db6573e1de86b64036956c8e20a8e1c4509c68c4283e2833d30"} pod="openstack/horizon-66f7b94cdb-f7pw2" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 12:12:29 crc kubenswrapper[4865]: I0123 12:12:29.364865 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" containerID="cri-o://374cfbc4973d1db6573e1de86b64036956c8e20a8e1c4509c68c4283e2833d30" gracePeriod=30 Jan 23 12:12:29 crc kubenswrapper[4865]: I0123 12:12:29.406734 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 12:12:29 crc kubenswrapper[4865]: I0123 12:12:29.882774 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:30 crc kubenswrapper[4865]: I0123 12:12:30.022996 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 12:12:30 crc kubenswrapper[4865]: I0123 12:12:30.418999 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="568083b0-5547-4699-905d-26e7ba8e510c" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:12:30 crc kubenswrapper[4865]: I0123 12:12:30.714812 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:12:30 crc kubenswrapper[4865]: I0123 12:12:30.803087 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674c76ff67-kjrj6"] Jan 23 12:12:30 crc kubenswrapper[4865]: I0123 12:12:30.803315 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" podUID="c07bee20-b47c-4881-87bc-adba361cd25a" containerName="dnsmasq-dns" containerID="cri-o://b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017" gracePeriod=10 Jan 23 12:12:31 crc kubenswrapper[4865]: I0123 12:12:31.581333 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:12:31 crc kubenswrapper[4865]: I0123 12:12:31.753152 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.172512 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.265106 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f5946d9d4-f849n" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.286650 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-config\") pod \"c07bee20-b47c-4881-87bc-adba361cd25a\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.286734 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-ovsdbserver-sb\") pod \"c07bee20-b47c-4881-87bc-adba361cd25a\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.286792 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-ovsdbserver-nb\") pod \"c07bee20-b47c-4881-87bc-adba361cd25a\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.286854 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-dns-swift-storage-0\") pod \"c07bee20-b47c-4881-87bc-adba361cd25a\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.286974 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrt9c\" (UniqueName: \"kubernetes.io/projected/c07bee20-b47c-4881-87bc-adba361cd25a-kube-api-access-mrt9c\") pod \"c07bee20-b47c-4881-87bc-adba361cd25a\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.287044 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-dns-svc\") pod \"c07bee20-b47c-4881-87bc-adba361cd25a\" (UID: \"c07bee20-b47c-4881-87bc-adba361cd25a\") " Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.316214 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f5946d9d4-f849n" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.317492 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6fd5fd954-xn5jf" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.376630 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c07bee20-b47c-4881-87bc-adba361cd25a-kube-api-access-mrt9c" (OuterVolumeSpecName: "kube-api-access-mrt9c") pod "c07bee20-b47c-4881-87bc-adba361cd25a" (UID: "c07bee20-b47c-4881-87bc-adba361cd25a"). 
InnerVolumeSpecName "kube-api-access-mrt9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.412182 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrt9c\" (UniqueName: \"kubernetes.io/projected/c07bee20-b47c-4881-87bc-adba361cd25a-kube-api-access-mrt9c\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.426919 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.429135 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.476698 4865 generic.go:334] "Generic (PLEG): container finished" podID="c07bee20-b47c-4881-87bc-adba361cd25a" containerID="b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017" exitCode=0 Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.486152 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.486433 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" event={"ID":"c07bee20-b47c-4881-87bc-adba361cd25a","Type":"ContainerDied","Data":"b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017"} Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.486471 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" event={"ID":"c07bee20-b47c-4881-87bc-adba361cd25a","Type":"ContainerDied","Data":"98e50244e01caccc17e5e49547f5768dddfdc15309e5ef71dca1ae6eff4e5c67"} Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.486497 4865 scope.go:117] "RemoveContainer" containerID="b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.564449 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c07bee20-b47c-4881-87bc-adba361cd25a" (UID: "c07bee20-b47c-4881-87bc-adba361cd25a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.569243 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c07bee20-b47c-4881-87bc-adba361cd25a" (UID: "c07bee20-b47c-4881-87bc-adba361cd25a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.576225 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-config" (OuterVolumeSpecName: "config") pod "c07bee20-b47c-4881-87bc-adba361cd25a" (UID: "c07bee20-b47c-4881-87bc-adba361cd25a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.592732 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c07bee20-b47c-4881-87bc-adba361cd25a" (UID: "c07bee20-b47c-4881-87bc-adba361cd25a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.598071 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c07bee20-b47c-4881-87bc-adba361cd25a" (UID: "c07bee20-b47c-4881-87bc-adba361cd25a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.601406 4865 scope.go:117] "RemoveContainer" containerID="c6d64ae7cc25b548d477e021f3d8e85c1727efad0260b6f797d74257abb577ed" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.658064 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.658101 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.658115 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.658135 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.658147 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c07bee20-b47c-4881-87bc-adba361cd25a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.658878 4865 scope.go:117] "RemoveContainer" containerID="b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017" Jan 23 12:12:32 crc kubenswrapper[4865]: E0123 12:12:32.661191 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017\": container with ID starting with b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017 not found: ID does not exist" containerID="b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.661235 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017"} err="failed to get container status \"b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017\": rpc error: code = NotFound desc = could not find container \"b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017\": container with ID starting 
with b0d978ae0fdee2d0bdb2bba4ff418080fc1dab01de52234409ffc5d6f3020017 not found: ID does not exist" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.661264 4865 scope.go:117] "RemoveContainer" containerID="c6d64ae7cc25b548d477e021f3d8e85c1727efad0260b6f797d74257abb577ed" Jan 23 12:12:32 crc kubenswrapper[4865]: E0123 12:12:32.667087 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d64ae7cc25b548d477e021f3d8e85c1727efad0260b6f797d74257abb577ed\": container with ID starting with c6d64ae7cc25b548d477e021f3d8e85c1727efad0260b6f797d74257abb577ed not found: ID does not exist" containerID="c6d64ae7cc25b548d477e021f3d8e85c1727efad0260b6f797d74257abb577ed" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.667154 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d64ae7cc25b548d477e021f3d8e85c1727efad0260b6f797d74257abb577ed"} err="failed to get container status \"c6d64ae7cc25b548d477e021f3d8e85c1727efad0260b6f797d74257abb577ed\": rpc error: code = NotFound desc = could not find container \"c6d64ae7cc25b548d477e021f3d8e85c1727efad0260b6f797d74257abb577ed\": container with ID starting with c6d64ae7cc25b548d477e021f3d8e85c1727efad0260b6f797d74257abb577ed not found: ID does not exist" Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.914776 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674c76ff67-kjrj6"] Jan 23 12:12:32 crc kubenswrapper[4865]: I0123 12:12:32.947047 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-674c76ff67-kjrj6"] Jan 23 12:12:34 crc kubenswrapper[4865]: I0123 12:12:34.129697 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c07bee20-b47c-4881-87bc-adba361cd25a" path="/var/lib/kubelet/pods/c07bee20-b47c-4881-87bc-adba361cd25a/volumes" Jan 23 12:12:34 crc kubenswrapper[4865]: I0123 12:12:34.304460 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:34 crc kubenswrapper[4865]: I0123 12:12:34.359447 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:34 crc kubenswrapper[4865]: I0123 12:12:34.497182 4865 generic.go:334] "Generic (PLEG): container finished" podID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerID="ad0bd0b06faa3989d6d91f836137fa93ac3878b4dcf0b308bb72332eb709161b" exitCode=0 Jan 23 12:12:34 crc kubenswrapper[4865]: I0123 12:12:34.497235 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d44bd7746-lpzlt" event={"ID":"581ecfce-2612-48aa-beeb-a41024ef2b6b","Type":"ContainerDied","Data":"ad0bd0b06faa3989d6d91f836137fa93ac3878b4dcf0b308bb72332eb709161b"} Jan 23 12:12:34 crc kubenswrapper[4865]: I0123 12:12:34.497267 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d44bd7746-lpzlt" event={"ID":"581ecfce-2612-48aa-beeb-a41024ef2b6b","Type":"ContainerStarted","Data":"fecf4f76049da39ba53062cbfb6e4bcdfc58676fe141eea30a24d30464ca2daf"} Jan 23 12:12:34 crc kubenswrapper[4865]: I0123 12:12:34.565469 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-b9c4785f9-kx698" Jan 23 
12:12:34 crc kubenswrapper[4865]: I0123 12:12:34.658967 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54757d9768-tnwjg"] Jan 23 12:12:34 crc kubenswrapper[4865]: I0123 12:12:34.659232 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54757d9768-tnwjg" podUID="ad8dd067-54ba-4a76-a30e-7542369c3b1d" containerName="neutron-api" containerID="cri-o://9bef326e5b4a8a08fec7b1a92b5401b43f385f19fcf71f9f6fbf90a6eb10027d" gracePeriod=30 Jan 23 12:12:34 crc kubenswrapper[4865]: I0123 12:12:34.659545 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-54757d9768-tnwjg" podUID="ad8dd067-54ba-4a76-a30e-7542369c3b1d" containerName="neutron-httpd" containerID="cri-o://e5c6758c5de86287520817f59d637a052e092caf231e0407c2f2724b9599ea14" gracePeriod=30 Jan 23 12:12:35 crc kubenswrapper[4865]: I0123 12:12:35.070722 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 12:12:35 crc kubenswrapper[4865]: I0123 12:12:35.195150 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 12:12:35 crc kubenswrapper[4865]: I0123 12:12:35.511751 4865 generic.go:334] "Generic (PLEG): container finished" podID="ad8dd067-54ba-4a76-a30e-7542369c3b1d" containerID="e5c6758c5de86287520817f59d637a052e092caf231e0407c2f2724b9599ea14" exitCode=0 Jan 23 12:12:35 crc kubenswrapper[4865]: I0123 12:12:35.512139 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="568083b0-5547-4699-905d-26e7ba8e510c" containerName="cinder-scheduler" containerID="cri-o://a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6" gracePeriod=30 Jan 23 12:12:35 crc kubenswrapper[4865]: I0123 12:12:35.512723 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54757d9768-tnwjg" event={"ID":"ad8dd067-54ba-4a76-a30e-7542369c3b1d","Type":"ContainerDied","Data":"e5c6758c5de86287520817f59d637a052e092caf231e0407c2f2724b9599ea14"} Jan 23 12:12:35 crc kubenswrapper[4865]: I0123 12:12:35.513022 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="568083b0-5547-4699-905d-26e7ba8e510c" containerName="probe" containerID="cri-o://ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9" gracePeriod=30 Jan 23 12:12:35 crc kubenswrapper[4865]: I0123 12:12:35.581865 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75dd7565cd-4skz5" Jan 23 12:12:35 crc kubenswrapper[4865]: I0123 12:12:35.655300 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f5946d9d4-f849n"] Jan 23 12:12:35 crc kubenswrapper[4865]: I0123 12:12:35.655776 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f5946d9d4-f849n" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api-log" containerID="cri-o://8ca271485ab203ab4686b756c0caf3b9ab346a773344a13b2f2784b3e6f037f8" gracePeriod=30 Jan 23 12:12:35 crc kubenswrapper[4865]: I0123 12:12:35.656146 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7f5946d9d4-f849n" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api" containerID="cri-o://7ed173f4be5ac5a3351c319876ccac9b2a22b1038a98c603ffb24312ae6d635d" gracePeriod=30 Jan 23 12:12:35 crc 
kubenswrapper[4865]: I0123 12:12:35.942883 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.170:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:36 crc kubenswrapper[4865]: I0123 12:12:36.524889 4865 generic.go:334] "Generic (PLEG): container finished" podID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerID="8ca271485ab203ab4686b756c0caf3b9ab346a773344a13b2f2784b3e6f037f8" exitCode=143 Jan 23 12:12:36 crc kubenswrapper[4865]: I0123 12:12:36.524943 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5946d9d4-f849n" event={"ID":"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6","Type":"ContainerDied","Data":"8ca271485ab203ab4686b756c0caf3b9ab346a773344a13b2f2784b3e6f037f8"} Jan 23 12:12:36 crc kubenswrapper[4865]: I0123 12:12:36.527347 4865 generic.go:334] "Generic (PLEG): container finished" podID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerID="374cfbc4973d1db6573e1de86b64036956c8e20a8e1c4509c68c4283e2833d30" exitCode=0 Jan 23 12:12:36 crc kubenswrapper[4865]: I0123 12:12:36.527378 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerDied","Data":"374cfbc4973d1db6573e1de86b64036956c8e20a8e1c4509c68c4283e2833d30"} Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.034726 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-674c76ff67-kjrj6" podUID="c07bee20-b47c-4881-87bc-adba361cd25a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: i/o timeout" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.247856 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5c55dfc954-p2hjb" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.305205 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.484578 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/568083b0-5547-4699-905d-26e7ba8e510c-etc-machine-id\") pod \"568083b0-5547-4699-905d-26e7ba8e510c\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.484655 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-scripts\") pod \"568083b0-5547-4699-905d-26e7ba8e510c\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.484680 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-config-data-custom\") pod \"568083b0-5547-4699-905d-26e7ba8e510c\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.484792 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m27m4\" (UniqueName: \"kubernetes.io/projected/568083b0-5547-4699-905d-26e7ba8e510c-kube-api-access-m27m4\") pod \"568083b0-5547-4699-905d-26e7ba8e510c\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.484881 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-config-data\") pod \"568083b0-5547-4699-905d-26e7ba8e510c\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.484913 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-combined-ca-bundle\") pod \"568083b0-5547-4699-905d-26e7ba8e510c\" (UID: \"568083b0-5547-4699-905d-26e7ba8e510c\") " Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.485697 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/568083b0-5547-4699-905d-26e7ba8e510c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "568083b0-5547-4699-905d-26e7ba8e510c" (UID: "568083b0-5547-4699-905d-26e7ba8e510c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.502896 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568083b0-5547-4699-905d-26e7ba8e510c-kube-api-access-m27m4" (OuterVolumeSpecName: "kube-api-access-m27m4") pod "568083b0-5547-4699-905d-26e7ba8e510c" (UID: "568083b0-5547-4699-905d-26e7ba8e510c"). InnerVolumeSpecName "kube-api-access-m27m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.502980 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "568083b0-5547-4699-905d-26e7ba8e510c" (UID: "568083b0-5547-4699-905d-26e7ba8e510c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.519265 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-scripts" (OuterVolumeSpecName: "scripts") pod "568083b0-5547-4699-905d-26e7ba8e510c" (UID: "568083b0-5547-4699-905d-26e7ba8e510c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.546081 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerStarted","Data":"62c21373c0eebc0a568adb43c80526621a4e95ed48f1f7ec1047e095cb2d1298"} Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.552151 4865 generic.go:334] "Generic (PLEG): container finished" podID="568083b0-5547-4699-905d-26e7ba8e510c" containerID="ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9" exitCode=0 Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.552177 4865 generic.go:334] "Generic (PLEG): container finished" podID="568083b0-5547-4699-905d-26e7ba8e510c" containerID="a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6" exitCode=0 Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.552196 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"568083b0-5547-4699-905d-26e7ba8e510c","Type":"ContainerDied","Data":"ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9"} Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.552219 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"568083b0-5547-4699-905d-26e7ba8e510c","Type":"ContainerDied","Data":"a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6"} Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.552239 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"568083b0-5547-4699-905d-26e7ba8e510c","Type":"ContainerDied","Data":"a6209d8d0b38d484f4c3cc7e101ab879368a3bd2146d5ef57a268ec9e37deff8"} Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.552253 4865 scope.go:117] "RemoveContainer" containerID="ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.552368 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.582968 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "568083b0-5547-4699-905d-26e7ba8e510c" (UID: "568083b0-5547-4699-905d-26e7ba8e510c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.604690 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m27m4\" (UniqueName: \"kubernetes.io/projected/568083b0-5547-4699-905d-26e7ba8e510c-kube-api-access-m27m4\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.604726 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.604735 4865 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/568083b0-5547-4699-905d-26e7ba8e510c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.604746 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.604763 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.649895 4865 scope.go:117] "RemoveContainer" containerID="a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.685188 4865 scope.go:117] "RemoveContainer" containerID="ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9" Jan 23 12:12:37 crc kubenswrapper[4865]: E0123 12:12:37.687253 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9\": container with ID starting with ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9 not found: ID does not exist" containerID="ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.687388 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9"} err="failed to get container status \"ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9\": rpc error: code = NotFound desc = could not find container \"ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9\": container with ID starting with ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9 not found: ID does not exist" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.687488 4865 scope.go:117] "RemoveContainer" containerID="a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6" Jan 23 12:12:37 crc kubenswrapper[4865]: E0123 12:12:37.687977 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6\": container with ID starting with a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6 not found: ID does not exist" containerID="a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.688029 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6"} err="failed to get container status \"a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6\": rpc error: code = NotFound desc = could not find container \"a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6\": container with ID starting with a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6 not found: ID does not exist" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.688055 4865 scope.go:117] "RemoveContainer" containerID="ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.688359 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9"} err="failed to get container status \"ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9\": rpc error: code = NotFound desc = could not find container \"ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9\": container with ID starting with ca2492f69f1023523790f4dc176aaf2295f75cabae4426763d0372c22b0c3df9 not found: ID does not exist" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.688459 4865 scope.go:117] "RemoveContainer" containerID="a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.692024 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6"} err="failed to get container status \"a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6\": rpc error: code = NotFound desc = could not find container \"a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6\": container with ID starting with a6feb975847ffdfb0852a689f459d74387a91c9769919d49fee003f5213558e6 not found: ID does not exist" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.757776 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-config-data" (OuterVolumeSpecName: "config-data") pod "568083b0-5547-4699-905d-26e7ba8e510c" (UID: "568083b0-5547-4699-905d-26e7ba8e510c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.810877 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/568083b0-5547-4699-905d-26e7ba8e510c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.899094 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.925381 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.933513 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 12:12:37 crc kubenswrapper[4865]: E0123 12:12:37.933935 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b330732a-65fd-4fde-bf81-fce0b551c99e" containerName="init" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.933958 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b330732a-65fd-4fde-bf81-fce0b551c99e" containerName="init" Jan 23 12:12:37 crc kubenswrapper[4865]: E0123 12:12:37.933970 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568083b0-5547-4699-905d-26e7ba8e510c" containerName="probe" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.933978 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="568083b0-5547-4699-905d-26e7ba8e510c" containerName="probe" Jan 23 12:12:37 crc kubenswrapper[4865]: E0123 12:12:37.933991 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07bee20-b47c-4881-87bc-adba361cd25a" containerName="init" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.933997 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07bee20-b47c-4881-87bc-adba361cd25a" containerName="init" Jan 23 12:12:37 crc kubenswrapper[4865]: E0123 12:12:37.934011 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568083b0-5547-4699-905d-26e7ba8e510c" containerName="cinder-scheduler" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.934017 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="568083b0-5547-4699-905d-26e7ba8e510c" containerName="cinder-scheduler" Jan 23 12:12:37 crc kubenswrapper[4865]: E0123 12:12:37.934028 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b330732a-65fd-4fde-bf81-fce0b551c99e" containerName="dnsmasq-dns" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.934033 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b330732a-65fd-4fde-bf81-fce0b551c99e" containerName="dnsmasq-dns" Jan 23 12:12:37 crc kubenswrapper[4865]: E0123 12:12:37.934045 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07bee20-b47c-4881-87bc-adba361cd25a" containerName="dnsmasq-dns" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.934053 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07bee20-b47c-4881-87bc-adba361cd25a" containerName="dnsmasq-dns" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.934219 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="568083b0-5547-4699-905d-26e7ba8e510c" containerName="probe" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.934231 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b330732a-65fd-4fde-bf81-fce0b551c99e" containerName="dnsmasq-dns" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.934243 4865 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c07bee20-b47c-4881-87bc-adba361cd25a" containerName="dnsmasq-dns" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.934266 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="568083b0-5547-4699-905d-26e7ba8e510c" containerName="cinder-scheduler" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.939945 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.942476 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 12:12:37 crc kubenswrapper[4865]: I0123 12:12:37.959812 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.014652 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c728912d-821c-4759-b175-3fd4324ad4f2-config-data\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.014987 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c728912d-821c-4759-b175-3fd4324ad4f2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.015179 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c728912d-821c-4759-b175-3fd4324ad4f2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.015347 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c728912d-821c-4759-b175-3fd4324ad4f2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.015497 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbl7r\" (UniqueName: \"kubernetes.io/projected/c728912d-821c-4759-b175-3fd4324ad4f2-kube-api-access-qbl7r\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.015631 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c728912d-821c-4759-b175-3fd4324ad4f2-scripts\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.116932 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbl7r\" (UniqueName: \"kubernetes.io/projected/c728912d-821c-4759-b175-3fd4324ad4f2-kube-api-access-qbl7r\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 
12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.117014 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c728912d-821c-4759-b175-3fd4324ad4f2-scripts\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.117089 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c728912d-821c-4759-b175-3fd4324ad4f2-config-data\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.117125 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c728912d-821c-4759-b175-3fd4324ad4f2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.117179 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c728912d-821c-4759-b175-3fd4324ad4f2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.117204 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c728912d-821c-4759-b175-3fd4324ad4f2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.117544 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c728912d-821c-4759-b175-3fd4324ad4f2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.121838 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c728912d-821c-4759-b175-3fd4324ad4f2-scripts\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.122780 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c728912d-821c-4759-b175-3fd4324ad4f2-config-data\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.125150 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c728912d-821c-4759-b175-3fd4324ad4f2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.125529 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c728912d-821c-4759-b175-3fd4324ad4f2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.134093 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568083b0-5547-4699-905d-26e7ba8e510c" path="/var/lib/kubelet/pods/568083b0-5547-4699-905d-26e7ba8e510c/volumes" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.140291 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbl7r\" (UniqueName: \"kubernetes.io/projected/c728912d-821c-4759-b175-3fd4324ad4f2-kube-api-access-qbl7r\") pod \"cinder-scheduler-0\" (UID: \"c728912d-821c-4759-b175-3fd4324ad4f2\") " pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.257107 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 23 12:12:38 crc kubenswrapper[4865]: I0123 12:12:38.710653 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.054325 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f5946d9d4-f849n" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:37668->10.217.0.165:9311: read: connection reset by peer" Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.054394 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7f5946d9d4-f849n" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:37670->10.217.0.165:9311: read: connection reset by peer" Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.582465 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c728912d-821c-4759-b175-3fd4324ad4f2","Type":"ContainerStarted","Data":"b90d06b70eab75b79e6118cecdde760b707c68babde87350587b6fa8ceedee93"} Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.585348 4865 generic.go:334] "Generic (PLEG): container finished" podID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerID="7ed173f4be5ac5a3351c319876ccac9b2a22b1038a98c603ffb24312ae6d635d" exitCode=0 Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.585404 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5946d9d4-f849n" event={"ID":"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6","Type":"ContainerDied","Data":"7ed173f4be5ac5a3351c319876ccac9b2a22b1038a98c603ffb24312ae6d635d"} Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.585434 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7f5946d9d4-f849n" event={"ID":"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6","Type":"ContainerDied","Data":"c16ce7751d86f51b7751f6dd5dadfe2b9f99b9cf687bdb99a3eae2d9327007a5"} Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.585446 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16ce7751d86f51b7751f6dd5dadfe2b9f99b9cf687bdb99a3eae2d9327007a5" Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.802660 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.955700 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwq2s\" (UniqueName: \"kubernetes.io/projected/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-kube-api-access-qwq2s\") pod \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.955983 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-config-data\") pod \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.956063 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-config-data-custom\") pod \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.956136 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-logs\") pod \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.956214 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-combined-ca-bundle\") pod \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\" (UID: \"d68b36ff-cc69-4150-a9ee-8c1ba4a094c6\") " Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.964510 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" (UID: "d68b36ff-cc69-4150-a9ee-8c1ba4a094c6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.964860 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-logs" (OuterVolumeSpecName: "logs") pod "d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" (UID: "d68b36ff-cc69-4150-a9ee-8c1ba4a094c6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:12:39 crc kubenswrapper[4865]: I0123 12:12:39.966492 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-kube-api-access-qwq2s" (OuterVolumeSpecName: "kube-api-access-qwq2s") pod "d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" (UID: "d68b36ff-cc69-4150-a9ee-8c1ba4a094c6"). InnerVolumeSpecName "kube-api-access-qwq2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:40 crc kubenswrapper[4865]: I0123 12:12:40.020565 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" (UID: "d68b36ff-cc69-4150-a9ee-8c1ba4a094c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:40 crc kubenswrapper[4865]: I0123 12:12:40.058992 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:40 crc kubenswrapper[4865]: I0123 12:12:40.059026 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:40 crc kubenswrapper[4865]: I0123 12:12:40.059039 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:40 crc kubenswrapper[4865]: I0123 12:12:40.059076 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwq2s\" (UniqueName: \"kubernetes.io/projected/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-kube-api-access-qwq2s\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:40 crc kubenswrapper[4865]: I0123 12:12:40.061696 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-config-data" (OuterVolumeSpecName: "config-data") pod "d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" (UID: "d68b36ff-cc69-4150-a9ee-8c1ba4a094c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:40 crc kubenswrapper[4865]: I0123 12:12:40.160460 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:40 crc kubenswrapper[4865]: I0123 12:12:40.597072 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7f5946d9d4-f849n" Jan 23 12:12:40 crc kubenswrapper[4865]: I0123 12:12:40.597246 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c728912d-821c-4759-b175-3fd4324ad4f2","Type":"ContainerStarted","Data":"69363188c023ec037365e6462967a0eb9169a136bc3d2131e45cd5a55c949188"} Jan 23 12:12:40 crc kubenswrapper[4865]: I0123 12:12:40.622524 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7f5946d9d4-f849n"] Jan 23 12:12:40 crc kubenswrapper[4865]: I0123 12:12:40.630830 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7f5946d9d4-f849n"] Jan 23 12:12:40 crc kubenswrapper[4865]: I0123 12:12:40.984792 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.170:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:12:41 crc kubenswrapper[4865]: I0123 12:12:41.608671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c728912d-821c-4759-b175-3fd4324ad4f2","Type":"ContainerStarted","Data":"07020e7ed8bb8833613cbe63fec1f5b862f40dbf45f5ef58a3695a353c8836aa"} Jan 23 12:12:41 crc kubenswrapper[4865]: I0123 12:12:41.635018 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.635003308 podStartE2EDuration="4.635003308s" podCreationTimestamp="2026-01-23 12:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:12:41.628931341 +0000 UTC m=+1205.798003567" watchObservedRunningTime="2026-01-23 12:12:41.635003308 +0000 UTC m=+1205.804075524" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.129508 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" path="/var/lib/kubelet/pods/d68b36ff-cc69-4150-a9ee-8c1ba4a094c6/volumes" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.416132 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 12:12:42 crc kubenswrapper[4865]: E0123 12:12:42.416550 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api-log" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.416575 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api-log" Jan 23 12:12:42 crc kubenswrapper[4865]: E0123 12:12:42.416616 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.416625 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.416845 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.416881 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d68b36ff-cc69-4150-a9ee-8c1ba4a094c6" containerName="barbican-api-log" Jan 23 12:12:42 
crc kubenswrapper[4865]: I0123 12:12:42.417521 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.422349 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bxlmh" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.424500 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.440043 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.459799 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.603496 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7105ca90-1ded-41e3-ba59-5b0b101be2d6-openstack-config\") pod \"openstackclient\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.603651 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7105ca90-1ded-41e3-ba59-5b0b101be2d6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.603879 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7105ca90-1ded-41e3-ba59-5b0b101be2d6-openstack-config-secret\") pod \"openstackclient\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.603990 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-445gd\" (UniqueName: \"kubernetes.io/projected/7105ca90-1ded-41e3-ba59-5b0b101be2d6-kube-api-access-445gd\") pod \"openstackclient\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.706130 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7105ca90-1ded-41e3-ba59-5b0b101be2d6-openstack-config\") pod \"openstackclient\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.706183 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7105ca90-1ded-41e3-ba59-5b0b101be2d6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.706239 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7105ca90-1ded-41e3-ba59-5b0b101be2d6-openstack-config-secret\") pod \"openstackclient\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: 
I0123 12:12:42.706272 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-445gd\" (UniqueName: \"kubernetes.io/projected/7105ca90-1ded-41e3-ba59-5b0b101be2d6-kube-api-access-445gd\") pod \"openstackclient\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.707105 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7105ca90-1ded-41e3-ba59-5b0b101be2d6-openstack-config\") pod \"openstackclient\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.734218 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7105ca90-1ded-41e3-ba59-5b0b101be2d6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.742268 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7105ca90-1ded-41e3-ba59-5b0b101be2d6-openstack-config-secret\") pod \"openstackclient\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.759790 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-445gd\" (UniqueName: \"kubernetes.io/projected/7105ca90-1ded-41e3-ba59-5b0b101be2d6-kube-api-access-445gd\") pod \"openstackclient\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.909653 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.910207 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.939138 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.967306 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.969165 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 12:12:42 crc kubenswrapper[4865]: I0123 12:12:42.994257 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.122149 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bdac524b-c42a-4074-9ff6-98827afa19c2-openstack-config-secret\") pod \"openstackclient\" (UID: \"bdac524b-c42a-4074-9ff6-98827afa19c2\") " pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.122232 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bdac524b-c42a-4074-9ff6-98827afa19c2-openstack-config\") pod \"openstackclient\" (UID: \"bdac524b-c42a-4074-9ff6-98827afa19c2\") " pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.122261 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdac524b-c42a-4074-9ff6-98827afa19c2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bdac524b-c42a-4074-9ff6-98827afa19c2\") " pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.122345 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zv5z\" (UniqueName: \"kubernetes.io/projected/bdac524b-c42a-4074-9ff6-98827afa19c2-kube-api-access-5zv5z\") pod \"openstackclient\" (UID: \"bdac524b-c42a-4074-9ff6-98827afa19c2\") " pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: E0123 12:12:43.165810 4865 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 23 12:12:43 crc kubenswrapper[4865]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_7105ca90-1ded-41e3-ba59-5b0b101be2d6_0(80f431afe007e7d5c17c5bd7c2d07849c6f1a7f9e7a0df7365b76b7e06a7fd27): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"80f431afe007e7d5c17c5bd7c2d07849c6f1a7f9e7a0df7365b76b7e06a7fd27" Netns:"/var/run/netns/3b396f56-097f-450f-8379-0f4a8a619160" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=80f431afe007e7d5c17c5bd7c2d07849c6f1a7f9e7a0df7365b76b7e06a7fd27;K8S_POD_UID=7105ca90-1ded-41e3-ba59-5b0b101be2d6" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/7105ca90-1ded-41e3-ba59-5b0b101be2d6]: expected pod UID "7105ca90-1ded-41e3-ba59-5b0b101be2d6" but got "bdac524b-c42a-4074-9ff6-98827afa19c2" from Kube API Jan 23 12:12:43 crc kubenswrapper[4865]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 12:12:43 crc kubenswrapper[4865]: > Jan 23 12:12:43 crc kubenswrapper[4865]: E0123 12:12:43.166129 4865 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err=< Jan 23 12:12:43 crc kubenswrapper[4865]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_7105ca90-1ded-41e3-ba59-5b0b101be2d6_0(80f431afe007e7d5c17c5bd7c2d07849c6f1a7f9e7a0df7365b76b7e06a7fd27): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"80f431afe007e7d5c17c5bd7c2d07849c6f1a7f9e7a0df7365b76b7e06a7fd27" Netns:"/var/run/netns/3b396f56-097f-450f-8379-0f4a8a619160" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=80f431afe007e7d5c17c5bd7c2d07849c6f1a7f9e7a0df7365b76b7e06a7fd27;K8S_POD_UID=7105ca90-1ded-41e3-ba59-5b0b101be2d6" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/7105ca90-1ded-41e3-ba59-5b0b101be2d6]: expected pod UID "7105ca90-1ded-41e3-ba59-5b0b101be2d6" but got "bdac524b-c42a-4074-9ff6-98827afa19c2" from Kube API Jan 23 12:12:43 crc kubenswrapper[4865]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 23 12:12:43 crc kubenswrapper[4865]: > pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.223963 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bdac524b-c42a-4074-9ff6-98827afa19c2-openstack-config-secret\") pod \"openstackclient\" (UID: \"bdac524b-c42a-4074-9ff6-98827afa19c2\") " pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.224336 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bdac524b-c42a-4074-9ff6-98827afa19c2-openstack-config\") pod \"openstackclient\" (UID: \"bdac524b-c42a-4074-9ff6-98827afa19c2\") " pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.224453 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdac524b-c42a-4074-9ff6-98827afa19c2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bdac524b-c42a-4074-9ff6-98827afa19c2\") " pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.224754 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zv5z\" (UniqueName: \"kubernetes.io/projected/bdac524b-c42a-4074-9ff6-98827afa19c2-kube-api-access-5zv5z\") pod \"openstackclient\" (UID: \"bdac524b-c42a-4074-9ff6-98827afa19c2\") " pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.225211 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bdac524b-c42a-4074-9ff6-98827afa19c2-openstack-config\") pod \"openstackclient\" (UID: \"bdac524b-c42a-4074-9ff6-98827afa19c2\") " pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.228416 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdac524b-c42a-4074-9ff6-98827afa19c2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bdac524b-c42a-4074-9ff6-98827afa19c2\") " pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.230099 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bdac524b-c42a-4074-9ff6-98827afa19c2-openstack-config-secret\") pod \"openstackclient\" (UID: \"bdac524b-c42a-4074-9ff6-98827afa19c2\") " pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.247131 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zv5z\" (UniqueName: \"kubernetes.io/projected/bdac524b-c42a-4074-9ff6-98827afa19c2-kube-api-access-5zv5z\") pod \"openstackclient\" (UID: \"bdac524b-c42a-4074-9ff6-98827afa19c2\") " pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.258045 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.303329 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.635362 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.639163 4865 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7105ca90-1ded-41e3-ba59-5b0b101be2d6" podUID="bdac524b-c42a-4074-9ff6-98827afa19c2" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.663906 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.836031 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7105ca90-1ded-41e3-ba59-5b0b101be2d6-combined-ca-bundle\") pod \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.836186 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7105ca90-1ded-41e3-ba59-5b0b101be2d6-openstack-config\") pod \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.836223 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7105ca90-1ded-41e3-ba59-5b0b101be2d6-openstack-config-secret\") pod \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.836313 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-445gd\" (UniqueName: \"kubernetes.io/projected/7105ca90-1ded-41e3-ba59-5b0b101be2d6-kube-api-access-445gd\") pod \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\" (UID: \"7105ca90-1ded-41e3-ba59-5b0b101be2d6\") " Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.837055 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7105ca90-1ded-41e3-ba59-5b0b101be2d6-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7105ca90-1ded-41e3-ba59-5b0b101be2d6" (UID: "7105ca90-1ded-41e3-ba59-5b0b101be2d6"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.850168 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7105ca90-1ded-41e3-ba59-5b0b101be2d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7105ca90-1ded-41e3-ba59-5b0b101be2d6" (UID: "7105ca90-1ded-41e3-ba59-5b0b101be2d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.853203 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7105ca90-1ded-41e3-ba59-5b0b101be2d6-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7105ca90-1ded-41e3-ba59-5b0b101be2d6" (UID: "7105ca90-1ded-41e3-ba59-5b0b101be2d6"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.856818 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7105ca90-1ded-41e3-ba59-5b0b101be2d6-kube-api-access-445gd" (OuterVolumeSpecName: "kube-api-access-445gd") pod "7105ca90-1ded-41e3-ba59-5b0b101be2d6" (UID: "7105ca90-1ded-41e3-ba59-5b0b101be2d6"). InnerVolumeSpecName "kube-api-access-445gd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.938001 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7105ca90-1ded-41e3-ba59-5b0b101be2d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.938036 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7105ca90-1ded-41e3-ba59-5b0b101be2d6-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.938045 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7105ca90-1ded-41e3-ba59-5b0b101be2d6-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.938055 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-445gd\" (UniqueName: \"kubernetes.io/projected/7105ca90-1ded-41e3-ba59-5b0b101be2d6-kube-api-access-445gd\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.948327 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 23 12:12:43 crc kubenswrapper[4865]: I0123 12:12:43.957568 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 12:12:44 crc kubenswrapper[4865]: I0123 12:12:44.089647 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:12:44 crc kubenswrapper[4865]: I0123 12:12:44.090672 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:12:44 crc kubenswrapper[4865]: I0123 12:12:44.091126 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 23 12:12:44 crc kubenswrapper[4865]: I0123 12:12:44.129719 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7105ca90-1ded-41e3-ba59-5b0b101be2d6" path="/var/lib/kubelet/pods/7105ca90-1ded-41e3-ba59-5b0b101be2d6/volumes" Jan 23 12:12:44 crc kubenswrapper[4865]: I0123 12:12:44.355665 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:12:44 crc kubenswrapper[4865]: I0123 12:12:44.355777 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:12:44 crc kubenswrapper[4865]: I0123 12:12:44.656372 4865 generic.go:334] "Generic (PLEG): container finished" podID="ad8dd067-54ba-4a76-a30e-7542369c3b1d" containerID="9bef326e5b4a8a08fec7b1a92b5401b43f385f19fcf71f9f6fbf90a6eb10027d" exitCode=0 Jan 23 12:12:44 crc kubenswrapper[4865]: I0123 12:12:44.656430 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54757d9768-tnwjg" event={"ID":"ad8dd067-54ba-4a76-a30e-7542369c3b1d","Type":"ContainerDied","Data":"9bef326e5b4a8a08fec7b1a92b5401b43f385f19fcf71f9f6fbf90a6eb10027d"} Jan 23 12:12:44 crc kubenswrapper[4865]: I0123 12:12:44.658111 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"bdac524b-c42a-4074-9ff6-98827afa19c2","Type":"ContainerStarted","Data":"52204c9d41c921d52f8f43a5999c3b60f97dac6271baf5ab6de3d7114d73122d"} Jan 23 12:12:44 crc kubenswrapper[4865]: I0123 12:12:44.658211 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 23 12:12:44 crc kubenswrapper[4865]: I0123 12:12:44.664314 4865 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="7105ca90-1ded-41e3-ba59-5b0b101be2d6" podUID="bdac524b-c42a-4074-9ff6-98827afa19c2" Jan 23 12:12:44 crc kubenswrapper[4865]: I0123 12:12:44.672882 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.269811 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.382531 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-config\") pod \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.382692 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-httpd-config\") pod \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.382820 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-combined-ca-bundle\") pod \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.382937 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4tsb\" (UniqueName: \"kubernetes.io/projected/ad8dd067-54ba-4a76-a30e-7542369c3b1d-kube-api-access-r4tsb\") pod \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.382963 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-ovndb-tls-certs\") pod \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\" (UID: \"ad8dd067-54ba-4a76-a30e-7542369c3b1d\") " Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.412956 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ad8dd067-54ba-4a76-a30e-7542369c3b1d" (UID: "ad8dd067-54ba-4a76-a30e-7542369c3b1d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.440129 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad8dd067-54ba-4a76-a30e-7542369c3b1d-kube-api-access-r4tsb" (OuterVolumeSpecName: "kube-api-access-r4tsb") pod "ad8dd067-54ba-4a76-a30e-7542369c3b1d" (UID: "ad8dd067-54ba-4a76-a30e-7542369c3b1d"). InnerVolumeSpecName "kube-api-access-r4tsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.486712 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4tsb\" (UniqueName: \"kubernetes.io/projected/ad8dd067-54ba-4a76-a30e-7542369c3b1d-kube-api-access-r4tsb\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.486749 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.499997 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad8dd067-54ba-4a76-a30e-7542369c3b1d" (UID: "ad8dd067-54ba-4a76-a30e-7542369c3b1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.530680 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-config" (OuterVolumeSpecName: "config") pod "ad8dd067-54ba-4a76-a30e-7542369c3b1d" (UID: "ad8dd067-54ba-4a76-a30e-7542369c3b1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.591424 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.591451 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.597952 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ad8dd067-54ba-4a76-a30e-7542369c3b1d" (UID: "ad8dd067-54ba-4a76-a30e-7542369c3b1d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.674455 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-54757d9768-tnwjg" event={"ID":"ad8dd067-54ba-4a76-a30e-7542369c3b1d","Type":"ContainerDied","Data":"5d17a3f84ec445e1856bf147e5ef0e491987cd4c6d7ad67256194a2f888c6be9"} Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.674519 4865 scope.go:117] "RemoveContainer" containerID="e5c6758c5de86287520817f59d637a052e092caf231e0407c2f2724b9599ea14" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.674753 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-54757d9768-tnwjg" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.692793 4865 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad8dd067-54ba-4a76-a30e-7542369c3b1d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.719470 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-54757d9768-tnwjg"] Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.723492 4865 scope.go:117] "RemoveContainer" containerID="9bef326e5b4a8a08fec7b1a92b5401b43f385f19fcf71f9f6fbf90a6eb10027d" Jan 23 12:12:45 crc kubenswrapper[4865]: I0123 12:12:45.731247 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-54757d9768-tnwjg"] Jan 23 12:12:45 crc kubenswrapper[4865]: E0123 12:12:45.832846 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad8dd067_54ba_4a76_a30e_7542369c3b1d.slice\": RecentStats: unable to find data in memory cache]" Jan 23 12:12:46 crc kubenswrapper[4865]: I0123 12:12:46.143330 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad8dd067-54ba-4a76-a30e-7542369c3b1d" path="/var/lib/kubelet/pods/ad8dd067-54ba-4a76-a30e-7542369c3b1d/volumes" Jan 23 12:12:48 crc kubenswrapper[4865]: I0123 12:12:48.776346 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:12:48 crc kubenswrapper[4865]: I0123 12:12:48.776451 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:12:48 crc kubenswrapper[4865]: I0123 12:12:48.822845 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 12:12:49 crc kubenswrapper[4865]: I0123 12:12:49.573637 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.396754 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7cc6994f4f-2qmtv"] Jan 23 12:12:51 crc kubenswrapper[4865]: E0123 12:12:51.397368 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad8dd067-54ba-4a76-a30e-7542369c3b1d" containerName="neutron-api" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.397379 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad8dd067-54ba-4a76-a30e-7542369c3b1d" containerName="neutron-api" Jan 23 12:12:51 crc kubenswrapper[4865]: E0123 12:12:51.397394 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad8dd067-54ba-4a76-a30e-7542369c3b1d" containerName="neutron-httpd" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.397399 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad8dd067-54ba-4a76-a30e-7542369c3b1d" containerName="neutron-httpd" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.397570 4865 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ad8dd067-54ba-4a76-a30e-7542369c3b1d" containerName="neutron-api" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.397587 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad8dd067-54ba-4a76-a30e-7542369c3b1d" containerName="neutron-httpd" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.398433 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.402788 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.402927 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.402987 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.417789 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7cc6994f4f-2qmtv"] Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.492364 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccb11fa4-50bf-4e12-a5fa-782c911e6955-internal-tls-certs\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.492424 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccb11fa4-50bf-4e12-a5fa-782c911e6955-log-httpd\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.492516 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccb11fa4-50bf-4e12-a5fa-782c911e6955-run-httpd\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.492562 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb11fa4-50bf-4e12-a5fa-782c911e6955-etc-swift\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.492611 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl4cn\" (UniqueName: \"kubernetes.io/projected/ccb11fa4-50bf-4e12-a5fa-782c911e6955-kube-api-access-nl4cn\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.492627 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccb11fa4-50bf-4e12-a5fa-782c911e6955-public-tls-certs\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " 
pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.492653 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb11fa4-50bf-4e12-a5fa-782c911e6955-config-data\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.492669 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb11fa4-50bf-4e12-a5fa-782c911e6955-combined-ca-bundle\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.594802 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb11fa4-50bf-4e12-a5fa-782c911e6955-config-data\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.594844 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb11fa4-50bf-4e12-a5fa-782c911e6955-combined-ca-bundle\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.595827 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccb11fa4-50bf-4e12-a5fa-782c911e6955-internal-tls-certs\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.595870 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccb11fa4-50bf-4e12-a5fa-782c911e6955-log-httpd\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.595898 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccb11fa4-50bf-4e12-a5fa-782c911e6955-run-httpd\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.595955 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb11fa4-50bf-4e12-a5fa-782c911e6955-etc-swift\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.595999 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl4cn\" (UniqueName: \"kubernetes.io/projected/ccb11fa4-50bf-4e12-a5fa-782c911e6955-kube-api-access-nl4cn\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " 
pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.596015 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccb11fa4-50bf-4e12-a5fa-782c911e6955-public-tls-certs\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.596582 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccb11fa4-50bf-4e12-a5fa-782c911e6955-run-httpd\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.598055 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ccb11fa4-50bf-4e12-a5fa-782c911e6955-log-httpd\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.604468 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccb11fa4-50bf-4e12-a5fa-782c911e6955-public-tls-certs\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.612774 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccb11fa4-50bf-4e12-a5fa-782c911e6955-combined-ca-bundle\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.612895 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ccb11fa4-50bf-4e12-a5fa-782c911e6955-config-data\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.615364 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccb11fa4-50bf-4e12-a5fa-782c911e6955-internal-tls-certs\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.629118 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ccb11fa4-50bf-4e12-a5fa-782c911e6955-etc-swift\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.635645 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl4cn\" (UniqueName: \"kubernetes.io/projected/ccb11fa4-50bf-4e12-a5fa-782c911e6955-kube-api-access-nl4cn\") pod \"swift-proxy-7cc6994f4f-2qmtv\" (UID: \"ccb11fa4-50bf-4e12-a5fa-782c911e6955\") " pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:51 crc kubenswrapper[4865]: I0123 12:12:51.726079 4865 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:12:53 crc kubenswrapper[4865]: I0123 12:12:53.167910 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:12:53 crc kubenswrapper[4865]: I0123 12:12:53.169625 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="ceilometer-central-agent" containerID="cri-o://57c67591266883497ef0bdbc1e43546d32e015ace7ba8c07961655549d534fd1" gracePeriod=30 Jan 23 12:12:53 crc kubenswrapper[4865]: I0123 12:12:53.170267 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="sg-core" containerID="cri-o://a869947817e7940600226a3079e8946e920766069a180c4538aec29c05190d1c" gracePeriod=30 Jan 23 12:12:53 crc kubenswrapper[4865]: I0123 12:12:53.170344 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="proxy-httpd" containerID="cri-o://e5d727a5155cb3a4ec53a69a21563e71943e7d30470e871f3736d0b6f9da3d45" gracePeriod=30 Jan 23 12:12:53 crc kubenswrapper[4865]: I0123 12:12:53.170423 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="ceilometer-notification-agent" containerID="cri-o://3ef2c875e2abe1a8810c86e36653524ea0fae9936696e5c4e3f0ea65fa7bbafc" gracePeriod=30 Jan 23 12:12:53 crc kubenswrapper[4865]: I0123 12:12:53.774565 4865 generic.go:334] "Generic (PLEG): container finished" podID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerID="e5d727a5155cb3a4ec53a69a21563e71943e7d30470e871f3736d0b6f9da3d45" exitCode=0 Jan 23 12:12:53 crc kubenswrapper[4865]: I0123 12:12:53.774623 4865 generic.go:334] "Generic (PLEG): container finished" podID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerID="a869947817e7940600226a3079e8946e920766069a180c4538aec29c05190d1c" exitCode=2 Jan 23 12:12:53 crc kubenswrapper[4865]: I0123 12:12:53.774636 4865 generic.go:334] "Generic (PLEG): container finished" podID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerID="57c67591266883497ef0bdbc1e43546d32e015ace7ba8c07961655549d534fd1" exitCode=0 Jan 23 12:12:53 crc kubenswrapper[4865]: I0123 12:12:53.774661 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89d6a1d7-0026-4530-a03b-65bcc436655e","Type":"ContainerDied","Data":"e5d727a5155cb3a4ec53a69a21563e71943e7d30470e871f3736d0b6f9da3d45"} Jan 23 12:12:53 crc kubenswrapper[4865]: I0123 12:12:53.774694 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89d6a1d7-0026-4530-a03b-65bcc436655e","Type":"ContainerDied","Data":"a869947817e7940600226a3079e8946e920766069a180c4538aec29c05190d1c"} Jan 23 12:12:53 crc kubenswrapper[4865]: I0123 12:12:53.774707 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89d6a1d7-0026-4530-a03b-65bcc436655e","Type":"ContainerDied","Data":"57c67591266883497ef0bdbc1e43546d32e015ace7ba8c07961655549d534fd1"} Jan 23 12:12:54 crc kubenswrapper[4865]: I0123 12:12:54.089814 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" probeResult="failure" 
output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 23 12:12:54 crc kubenswrapper[4865]: I0123 12:12:54.356955 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 12:12:54 crc kubenswrapper[4865]: I0123 12:12:54.789947 4865 generic.go:334] "Generic (PLEG): container finished" podID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerID="3ef2c875e2abe1a8810c86e36653524ea0fae9936696e5c4e3f0ea65fa7bbafc" exitCode=0 Jan 23 12:12:54 crc kubenswrapper[4865]: I0123 12:12:54.790015 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"89d6a1d7-0026-4530-a03b-65bcc436655e","Type":"ContainerDied","Data":"3ef2c875e2abe1a8810c86e36653524ea0fae9936696e5c4e3f0ea65fa7bbafc"} Jan 23 12:12:56 crc kubenswrapper[4865]: I0123 12:12:56.829806 4865 generic.go:334] "Generic (PLEG): container finished" podID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerID="e8c560f6a96e10829538cb3ad68eaf34a62dfc2c6e388bbba51ded3ea1a7fbb3" exitCode=137 Jan 23 12:12:56 crc kubenswrapper[4865]: I0123 12:12:56.829880 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c4d0d746-ca83-4a4f-b0b4-355f268f47fd","Type":"ContainerDied","Data":"e8c560f6a96e10829538cb3ad68eaf34a62dfc2c6e388bbba51ded3ea1a7fbb3"} Jan 23 12:13:00 crc kubenswrapper[4865]: I0123 12:13:00.901209 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.170:8776/healthcheck\": dial tcp 10.217.0.170:8776: connect: connection refused" Jan 23 12:13:02 crc kubenswrapper[4865]: E0123 12:13:02.829203 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-openstackclient:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:13:02 crc kubenswrapper[4865]: E0123 12:13:02.829500 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-openstackclient:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:13:02 crc kubenswrapper[4865]: E0123 12:13:02.829638 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-openstackclient:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndch696h566h77h87h54chf6h695h688h54dh547h669h5fh668h89h6chf5hc9h8dhdbh55bh89h8bh55fhf8h584h58bhcdh644h67bhbh546q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5zv5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(bdac524b-c42a-4074-9ff6-98827afa19c2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:13:02 crc kubenswrapper[4865]: E0123 12:13:02.831069 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="bdac524b-c42a-4074-9ff6-98827afa19c2" Jan 23 12:13:02 crc kubenswrapper[4865]: E0123 12:13:02.921714 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-openstackclient:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/openstackclient" podUID="bdac524b-c42a-4074-9ff6-98827afa19c2" Jan 23 12:13:03 crc kubenswrapper[4865]: I0123 12:13:03.856020 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7cc6994f4f-2qmtv"] Jan 23 12:13:03 crc kubenswrapper[4865]: I0123 12:13:03.931617 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"89d6a1d7-0026-4530-a03b-65bcc436655e","Type":"ContainerDied","Data":"f1926692bb390bc81a412518de0a001af0a41d838c8ef3593c0b4e261ab3b662"} Jan 23 12:13:03 crc kubenswrapper[4865]: I0123 12:13:03.931664 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1926692bb390bc81a412518de0a001af0a41d838c8ef3593c0b4e261ab3b662" Jan 23 12:13:03 crc kubenswrapper[4865]: I0123 12:13:03.933554 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c4d0d746-ca83-4a4f-b0b4-355f268f47fd","Type":"ContainerDied","Data":"3e0d7c87fbc0a3170a157044db26ee4d507e1106ef48e69f0ff8ae738d61794e"} Jan 23 12:13:03 crc kubenswrapper[4865]: I0123 12:13:03.933584 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e0d7c87fbc0a3170a157044db26ee4d507e1106ef48e69f0ff8ae738d61794e" Jan 23 12:13:03 crc kubenswrapper[4865]: I0123 12:13:03.935914 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7cc6994f4f-2qmtv" event={"ID":"ccb11fa4-50bf-4e12-a5fa-782c911e6955","Type":"ContainerStarted","Data":"7f184b240748f0b0482947f2b409cc4bd8c9290568829634efe7579307b4e22a"} Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.084749 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.090546 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.090659 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.091453 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"fecf4f76049da39ba53062cbfb6e4bcdfc58676fe141eea30a24d30464ca2daf"} pod="openstack/horizon-7d44bd7746-lpzlt" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.091497 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" containerID="cri-o://fecf4f76049da39ba53062cbfb6e4bcdfc58676fe141eea30a24d30464ca2daf" gracePeriod=30 Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.107576 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.168898 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gl9t\" (UniqueName: \"kubernetes.io/projected/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-kube-api-access-9gl9t\") pod \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169262 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89d6a1d7-0026-4530-a03b-65bcc436655e-run-httpd\") pod \"89d6a1d7-0026-4530-a03b-65bcc436655e\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169302 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-combined-ca-bundle\") pod \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169329 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hcwp\" (UniqueName: \"kubernetes.io/projected/89d6a1d7-0026-4530-a03b-65bcc436655e-kube-api-access-4hcwp\") pod \"89d6a1d7-0026-4530-a03b-65bcc436655e\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169396 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-logs\") pod \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169435 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-etc-machine-id\") pod \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169476 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89d6a1d7-0026-4530-a03b-65bcc436655e-log-httpd\") pod \"89d6a1d7-0026-4530-a03b-65bcc436655e\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169615 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-config-data\") pod \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169645 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-sg-core-conf-yaml\") pod \"89d6a1d7-0026-4530-a03b-65bcc436655e\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169695 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-config-data\") pod \"89d6a1d7-0026-4530-a03b-65bcc436655e\" (UID: 
\"89d6a1d7-0026-4530-a03b-65bcc436655e\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169759 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-combined-ca-bundle\") pod \"89d6a1d7-0026-4530-a03b-65bcc436655e\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169794 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-scripts\") pod \"89d6a1d7-0026-4530-a03b-65bcc436655e\" (UID: \"89d6a1d7-0026-4530-a03b-65bcc436655e\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169837 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-scripts\") pod \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.169946 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-config-data-custom\") pod \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\" (UID: \"c4d0d746-ca83-4a4f-b0b4-355f268f47fd\") " Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.171540 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d6a1d7-0026-4530-a03b-65bcc436655e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "89d6a1d7-0026-4530-a03b-65bcc436655e" (UID: "89d6a1d7-0026-4530-a03b-65bcc436655e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.173214 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-logs" (OuterVolumeSpecName: "logs") pod "c4d0d746-ca83-4a4f-b0b4-355f268f47fd" (UID: "c4d0d746-ca83-4a4f-b0b4-355f268f47fd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.175779 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c4d0d746-ca83-4a4f-b0b4-355f268f47fd" (UID: "c4d0d746-ca83-4a4f-b0b4-355f268f47fd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.182868 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d6a1d7-0026-4530-a03b-65bcc436655e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "89d6a1d7-0026-4530-a03b-65bcc436655e" (UID: "89d6a1d7-0026-4530-a03b-65bcc436655e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.193644 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-kube-api-access-9gl9t" (OuterVolumeSpecName: "kube-api-access-9gl9t") pod "c4d0d746-ca83-4a4f-b0b4-355f268f47fd" (UID: "c4d0d746-ca83-4a4f-b0b4-355f268f47fd"). InnerVolumeSpecName "kube-api-access-9gl9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.195414 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c4d0d746-ca83-4a4f-b0b4-355f268f47fd" (UID: "c4d0d746-ca83-4a4f-b0b4-355f268f47fd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.195562 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d6a1d7-0026-4530-a03b-65bcc436655e-kube-api-access-4hcwp" (OuterVolumeSpecName: "kube-api-access-4hcwp") pod "89d6a1d7-0026-4530-a03b-65bcc436655e" (UID: "89d6a1d7-0026-4530-a03b-65bcc436655e"). InnerVolumeSpecName "kube-api-access-4hcwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.196260 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-scripts" (OuterVolumeSpecName: "scripts") pod "c4d0d746-ca83-4a4f-b0b4-355f268f47fd" (UID: "c4d0d746-ca83-4a4f-b0b4-355f268f47fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.197879 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-scripts" (OuterVolumeSpecName: "scripts") pod "89d6a1d7-0026-4530-a03b-65bcc436655e" (UID: "89d6a1d7-0026-4530-a03b-65bcc436655e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.272162 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89d6a1d7-0026-4530-a03b-65bcc436655e-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.272197 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.272207 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.272216 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.272226 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gl9t\" (UniqueName: \"kubernetes.io/projected/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-kube-api-access-9gl9t\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.272235 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/89d6a1d7-0026-4530-a03b-65bcc436655e-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.272244 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hcwp\" (UniqueName: 
\"kubernetes.io/projected/89d6a1d7-0026-4530-a03b-65bcc436655e-kube-api-access-4hcwp\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.272253 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.272261 4865 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.344053 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4d0d746-ca83-4a4f-b0b4-355f268f47fd" (UID: "c4d0d746-ca83-4a4f-b0b4-355f268f47fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.355422 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.373846 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.396881 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-config-data" (OuterVolumeSpecName: "config-data") pod "c4d0d746-ca83-4a4f-b0b4-355f268f47fd" (UID: "c4d0d746-ca83-4a4f-b0b4-355f268f47fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.407954 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "89d6a1d7-0026-4530-a03b-65bcc436655e" (UID: "89d6a1d7-0026-4530-a03b-65bcc436655e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.475644 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.475672 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4d0d746-ca83-4a4f-b0b4-355f268f47fd-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.482708 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89d6a1d7-0026-4530-a03b-65bcc436655e" (UID: "89d6a1d7-0026-4530-a03b-65bcc436655e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.593863 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.675698 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-config-data" (OuterVolumeSpecName: "config-data") pod "89d6a1d7-0026-4530-a03b-65bcc436655e" (UID: "89d6a1d7-0026-4530-a03b-65bcc436655e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.696091 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d6a1d7-0026-4530-a03b-65bcc436655e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.946715 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7cc6994f4f-2qmtv" event={"ID":"ccb11fa4-50bf-4e12-a5fa-782c911e6955","Type":"ContainerStarted","Data":"652f1689d5dabf5923fa105a8c56f2c88c23a155c5ca6ef6730c3f33470f1ac0"} Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.946798 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7cc6994f4f-2qmtv" event={"ID":"ccb11fa4-50bf-4e12-a5fa-782c911e6955","Type":"ContainerStarted","Data":"775549cb37d8ce59c364f8dc08ac1ca48744ee0259537bccb2b3b31ff9fa9910"} Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.946806 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.946838 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.946845 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.947462 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:13:04 crc kubenswrapper[4865]: I0123 12:13:04.996630 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7cc6994f4f-2qmtv" podStartSLOduration=13.996611359 podStartE2EDuration="13.996611359s" podCreationTimestamp="2026-01-23 12:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:04.97309935 +0000 UTC m=+1229.142171576" watchObservedRunningTime="2026-01-23 12:13:04.996611359 +0000 UTC m=+1229.165683575" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.011012 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.020962 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.028365 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.035545 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.044159 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 23 12:13:05 crc kubenswrapper[4865]: E0123 12:13:05.044612 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="sg-core" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.044634 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="sg-core" Jan 23 12:13:05 crc kubenswrapper[4865]: E0123 12:13:05.044655 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerName="cinder-api" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.044664 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerName="cinder-api" Jan 23 12:13:05 crc kubenswrapper[4865]: E0123 12:13:05.044681 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerName="cinder-api-log" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.044689 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerName="cinder-api-log" Jan 23 12:13:05 crc kubenswrapper[4865]: E0123 12:13:05.044711 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="ceilometer-notification-agent" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.044718 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="ceilometer-notification-agent" Jan 23 12:13:05 crc kubenswrapper[4865]: E0123 12:13:05.044737 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="ceilometer-central-agent" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.044744 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="ceilometer-central-agent" Jan 23 
12:13:05 crc kubenswrapper[4865]: E0123 12:13:05.044758 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="proxy-httpd" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.044766 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="proxy-httpd" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.044955 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="proxy-httpd" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.044974 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="ceilometer-notification-agent" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.044990 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerName="cinder-api" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.045001 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" containerName="cinder-api-log" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.045020 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="sg-core" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.045031 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" containerName="ceilometer-central-agent" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.051741 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.055613 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.055857 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.056980 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.058862 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.059372 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.071760 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.072003 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.082931 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.099656 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.205548 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-config-data-custom\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.205613 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71a53ffd-0378-43c6-b759-61a7d90a6bdd-log-httpd\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.205641 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.205733 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shtgc\" (UniqueName: \"kubernetes.io/projected/00b1558f-6054-43bb-82a7-329436ce1a0b-kube-api-access-shtgc\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.205849 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00b1558f-6054-43bb-82a7-329436ce1a0b-logs\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.205905 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.205936 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.206018 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-scripts\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.206056 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-config-data\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.206118 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71a53ffd-0378-43c6-b759-61a7d90a6bdd-run-httpd\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.206153 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ptp9\" (UniqueName: \"kubernetes.io/projected/71a53ffd-0378-43c6-b759-61a7d90a6bdd-kube-api-access-6ptp9\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.206179 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00b1558f-6054-43bb-82a7-329436ce1a0b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.206206 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.206231 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-config-data\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.206253 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.206327 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-scripts\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308401 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-config-data-custom\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: 
I0123 12:13:05.308447 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71a53ffd-0378-43c6-b759-61a7d90a6bdd-log-httpd\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308468 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308490 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shtgc\" (UniqueName: \"kubernetes.io/projected/00b1558f-6054-43bb-82a7-329436ce1a0b-kube-api-access-shtgc\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308536 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00b1558f-6054-43bb-82a7-329436ce1a0b-logs\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308566 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308587 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308623 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-scripts\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308641 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-config-data\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308661 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71a53ffd-0378-43c6-b759-61a7d90a6bdd-run-httpd\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308677 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ptp9\" (UniqueName: \"kubernetes.io/projected/71a53ffd-0378-43c6-b759-61a7d90a6bdd-kube-api-access-6ptp9\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 
12:13:05.308700 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00b1558f-6054-43bb-82a7-329436ce1a0b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308721 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308743 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-config-data\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308762 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.308812 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-scripts\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.312396 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71a53ffd-0378-43c6-b759-61a7d90a6bdd-run-httpd\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.312526 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00b1558f-6054-43bb-82a7-329436ce1a0b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.312676 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71a53ffd-0378-43c6-b759-61a7d90a6bdd-log-httpd\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.316563 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.330421 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-public-tls-certs\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.330718 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00b1558f-6054-43bb-82a7-329436ce1a0b-logs\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.333619 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-config-data-custom\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.335216 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-config-data\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.337276 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-scripts\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.337749 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-config-data\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.338035 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.338127 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.339015 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.343212 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b1558f-6054-43bb-82a7-329436ce1a0b-scripts\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.343861 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ptp9\" (UniqueName: \"kubernetes.io/projected/71a53ffd-0378-43c6-b759-61a7d90a6bdd-kube-api-access-6ptp9\") pod \"ceilometer-0\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " pod="openstack/ceilometer-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.367199 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shtgc\" (UniqueName: 
\"kubernetes.io/projected/00b1558f-6054-43bb-82a7-329436ce1a0b-kube-api-access-shtgc\") pod \"cinder-api-0\" (UID: \"00b1558f-6054-43bb-82a7-329436ce1a0b\") " pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.369339 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 23 12:13:05 crc kubenswrapper[4865]: I0123 12:13:05.383728 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:13:06 crc kubenswrapper[4865]: I0123 12:13:06.002509 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:06 crc kubenswrapper[4865]: I0123 12:13:06.112122 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 23 12:13:06 crc kubenswrapper[4865]: W0123 12:13:06.112346 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00b1558f_6054_43bb_82a7_329436ce1a0b.slice/crio-313308854ccbc0e9c1cb83fc63c0f164a1fbc2f9d23cbbffcb592332e734bf60 WatchSource:0}: Error finding container 313308854ccbc0e9c1cb83fc63c0f164a1fbc2f9d23cbbffcb592332e734bf60: Status 404 returned error can't find the container with id 313308854ccbc0e9c1cb83fc63c0f164a1fbc2f9d23cbbffcb592332e734bf60 Jan 23 12:13:06 crc kubenswrapper[4865]: I0123 12:13:06.133679 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d6a1d7-0026-4530-a03b-65bcc436655e" path="/var/lib/kubelet/pods/89d6a1d7-0026-4530-a03b-65bcc436655e/volumes" Jan 23 12:13:06 crc kubenswrapper[4865]: I0123 12:13:06.135057 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d0d746-ca83-4a4f-b0b4-355f268f47fd" path="/var/lib/kubelet/pods/c4d0d746-ca83-4a4f-b0b4-355f268f47fd/volumes" Jan 23 12:13:06 crc kubenswrapper[4865]: I0123 12:13:06.990436 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00b1558f-6054-43bb-82a7-329436ce1a0b","Type":"ContainerStarted","Data":"313308854ccbc0e9c1cb83fc63c0f164a1fbc2f9d23cbbffcb592332e734bf60"} Jan 23 12:13:07 crc kubenswrapper[4865]: I0123 12:13:07.033138 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71a53ffd-0378-43c6-b759-61a7d90a6bdd","Type":"ContainerStarted","Data":"4c809754647e2c4befea0d2e8c57b42b56c810a6911c9d3bdc8c04ac9369eba7"} Jan 23 12:13:07 crc kubenswrapper[4865]: I0123 12:13:07.664779 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.606737 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-696547d8cb-9scxl"] Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.612117 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.624087 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gqlj9" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.624735 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.636992 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.668508 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-696547d8cb-9scxl"] Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.707805 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwpkm\" (UniqueName: \"kubernetes.io/projected/8de820a5-b4df-48bb-aa66-756fe92e787d-kube-api-access-vwpkm\") pod \"heat-engine-696547d8cb-9scxl\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.708111 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-config-data\") pod \"heat-engine-696547d8cb-9scxl\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.708249 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-combined-ca-bundle\") pod \"heat-engine-696547d8cb-9scxl\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.708409 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-config-data-custom\") pod \"heat-engine-696547d8cb-9scxl\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.801011 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8bb9b79-km2l6"] Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.802447 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.810573 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-config-data-custom\") pod \"heat-engine-696547d8cb-9scxl\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.810682 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwpkm\" (UniqueName: \"kubernetes.io/projected/8de820a5-b4df-48bb-aa66-756fe92e787d-kube-api-access-vwpkm\") pod \"heat-engine-696547d8cb-9scxl\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.810721 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-config-data\") pod \"heat-engine-696547d8cb-9scxl\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.810773 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-combined-ca-bundle\") pod \"heat-engine-696547d8cb-9scxl\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.822638 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-combined-ca-bundle\") pod \"heat-engine-696547d8cb-9scxl\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.827386 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-config-data-custom\") pod \"heat-engine-696547d8cb-9scxl\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.835858 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-config-data\") pod \"heat-engine-696547d8cb-9scxl\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.855099 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwpkm\" (UniqueName: \"kubernetes.io/projected/8de820a5-b4df-48bb-aa66-756fe92e787d-kube-api-access-vwpkm\") pod \"heat-engine-696547d8cb-9scxl\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.861898 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8bb9b79-km2l6"] Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.931580 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-ovsdbserver-nb\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.931777 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-ovsdbserver-sb\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.931921 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-config\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.932003 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-dns-svc\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.932045 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b52wl\" (UniqueName: \"kubernetes.io/projected/dcc29799-616d-44f6-8cee-0518f590df2e-kube-api-access-b52wl\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.932107 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-dns-swift-storage-0\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.946635 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7765485f88-srzk9"] Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.949340 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.956077 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.960873 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7765485f88-srzk9"] Jan 23 12:13:08 crc kubenswrapper[4865]: I0123 12:13:08.972791 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.018047 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-649fdd8b77-7ffrr"] Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.019529 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.021290 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.040169 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-config\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.040278 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-config-data-custom\") pod \"heat-api-649fdd8b77-7ffrr\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.040311 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-dns-svc\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.040347 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b52wl\" (UniqueName: \"kubernetes.io/projected/dcc29799-616d-44f6-8cee-0518f590df2e-kube-api-access-b52wl\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.040394 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-dns-swift-storage-0\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.040465 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-config-data\") pod \"heat-api-649fdd8b77-7ffrr\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.040548 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-ovsdbserver-nb\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.041313 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-config\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.049334 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-dns-svc\") pod 
\"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.050065 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-dns-swift-storage-0\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.050799 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-ovsdbserver-nb\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.052014 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-ovsdbserver-sb\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.052059 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-combined-ca-bundle\") pod \"heat-api-649fdd8b77-7ffrr\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.052103 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6hs2\" (UniqueName: \"kubernetes.io/projected/e34e3f6e-75ef-442d-a746-030072cde322-kube-api-access-b6hs2\") pod \"heat-api-649fdd8b77-7ffrr\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.052920 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-ovsdbserver-sb\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.075361 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b52wl\" (UniqueName: \"kubernetes.io/projected/dcc29799-616d-44f6-8cee-0518f590df2e-kube-api-access-b52wl\") pod \"dnsmasq-dns-b8bb9b79-km2l6\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.078195 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00b1558f-6054-43bb-82a7-329436ce1a0b","Type":"ContainerStarted","Data":"a3217e4577dbb49d6f8934e59e52397f42f29fcc588403fd9a007a7f0f22e15f"} Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.136437 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71a53ffd-0378-43c6-b759-61a7d90a6bdd","Type":"ContainerStarted","Data":"01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49"} Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.158655 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-config-data-custom\") pod \"heat-api-649fdd8b77-7ffrr\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.158787 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-config-data-custom\") pod \"heat-cfnapi-7765485f88-srzk9\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.181037 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-config-data-custom\") pod \"heat-api-649fdd8b77-7ffrr\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.184759 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-config-data\") pod \"heat-api-649fdd8b77-7ffrr\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.184945 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-config-data\") pod \"heat-cfnapi-7765485f88-srzk9\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.184990 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbg6s\" (UniqueName: \"kubernetes.io/projected/3d5c794f-d61a-46ab-b696-56f903df1451-kube-api-access-lbg6s\") pod \"heat-cfnapi-7765485f88-srzk9\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.185018 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-combined-ca-bundle\") pod \"heat-cfnapi-7765485f88-srzk9\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.185170 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-combined-ca-bundle\") pod \"heat-api-649fdd8b77-7ffrr\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.185222 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6hs2\" (UniqueName: \"kubernetes.io/projected/e34e3f6e-75ef-442d-a746-030072cde322-kube-api-access-b6hs2\") pod \"heat-api-649fdd8b77-7ffrr\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 
12:13:09.196221 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-649fdd8b77-7ffrr"] Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.200309 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-combined-ca-bundle\") pod \"heat-api-649fdd8b77-7ffrr\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.205513 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-config-data\") pod \"heat-api-649fdd8b77-7ffrr\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.219151 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.262541 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6hs2\" (UniqueName: \"kubernetes.io/projected/e34e3f6e-75ef-442d-a746-030072cde322-kube-api-access-b6hs2\") pod \"heat-api-649fdd8b77-7ffrr\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.337734 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-config-data-custom\") pod \"heat-cfnapi-7765485f88-srzk9\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.337891 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-config-data\") pod \"heat-cfnapi-7765485f88-srzk9\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.337925 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbg6s\" (UniqueName: \"kubernetes.io/projected/3d5c794f-d61a-46ab-b696-56f903df1451-kube-api-access-lbg6s\") pod \"heat-cfnapi-7765485f88-srzk9\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.337947 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-combined-ca-bundle\") pod \"heat-cfnapi-7765485f88-srzk9\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.381722 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-config-data\") pod \"heat-cfnapi-7765485f88-srzk9\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.382688 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-combined-ca-bundle\") pod \"heat-cfnapi-7765485f88-srzk9\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.384549 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-config-data-custom\") pod \"heat-cfnapi-7765485f88-srzk9\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.394545 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbg6s\" (UniqueName: \"kubernetes.io/projected/3d5c794f-d61a-46ab-b696-56f903df1451-kube-api-access-lbg6s\") pod \"heat-cfnapi-7765485f88-srzk9\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.487733 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.617698 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:09 crc kubenswrapper[4865]: I0123 12:13:09.830246 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-696547d8cb-9scxl"] Jan 23 12:13:10 crc kubenswrapper[4865]: I0123 12:13:10.246420 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8bb9b79-km2l6"] Jan 23 12:13:10 crc kubenswrapper[4865]: I0123 12:13:10.299159 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71a53ffd-0378-43c6-b759-61a7d90a6bdd","Type":"ContainerStarted","Data":"7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034"} Jan 23 12:13:10 crc kubenswrapper[4865]: W0123 12:13:10.300010 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcc29799_616d_44f6_8cee_0518f590df2e.slice/crio-2e37bf5305e6199093e4cba5f0d0de2b19527ad5d8783ea495acf27ef8c916b5 WatchSource:0}: Error finding container 2e37bf5305e6199093e4cba5f0d0de2b19527ad5d8783ea495acf27ef8c916b5: Status 404 returned error can't find the container with id 2e37bf5305e6199093e4cba5f0d0de2b19527ad5d8783ea495acf27ef8c916b5 Jan 23 12:13:10 crc kubenswrapper[4865]: I0123 12:13:10.302228 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-696547d8cb-9scxl" event={"ID":"8de820a5-b4df-48bb-aa66-756fe92e787d","Type":"ContainerStarted","Data":"f3bf1575715eada97a91318fd65994af6ed350492f78c3a3dd6b2f07a0731959"} Jan 23 12:13:10 crc kubenswrapper[4865]: I0123 12:13:10.331793 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"00b1558f-6054-43bb-82a7-329436ce1a0b","Type":"ContainerStarted","Data":"a1581f28798f2867b9f47c4881aa9f9c4583a9c68020e7be7ab4db2f390d0f14"} Jan 23 12:13:10 crc kubenswrapper[4865]: I0123 12:13:10.332680 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 23 12:13:10 crc kubenswrapper[4865]: I0123 12:13:10.411486 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.411470106 podStartE2EDuration="5.411470106s" 
podCreationTimestamp="2026-01-23 12:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:10.38395897 +0000 UTC m=+1234.553031196" watchObservedRunningTime="2026-01-23 12:13:10.411470106 +0000 UTC m=+1234.580542332" Jan 23 12:13:10 crc kubenswrapper[4865]: I0123 12:13:10.451234 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-649fdd8b77-7ffrr"] Jan 23 12:13:10 crc kubenswrapper[4865]: I0123 12:13:10.556516 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7765485f88-srzk9"] Jan 23 12:13:10 crc kubenswrapper[4865]: I0123 12:13:10.872821 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:13:11 crc kubenswrapper[4865]: I0123 12:13:11.349108 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7765485f88-srzk9" event={"ID":"3d5c794f-d61a-46ab-b696-56f903df1451","Type":"ContainerStarted","Data":"3c7334385a0fda36a082c6126e0609cac8bddeb8e3ca516bdab65c2cfae0ea02"} Jan 23 12:13:11 crc kubenswrapper[4865]: I0123 12:13:11.359679 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-649fdd8b77-7ffrr" event={"ID":"e34e3f6e-75ef-442d-a746-030072cde322","Type":"ContainerStarted","Data":"22f03eff6ebcd64e46488cfa97e11cbcf2c7e0f6d3dc628fa9e6c0b4ec71e9b1"} Jan 23 12:13:11 crc kubenswrapper[4865]: I0123 12:13:11.361934 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" event={"ID":"dcc29799-616d-44f6-8cee-0518f590df2e","Type":"ContainerStarted","Data":"c0ad45a1201dd30467a79e3d63b687284caa67af648a7a6e5b67eafaf1974870"} Jan 23 12:13:11 crc kubenswrapper[4865]: I0123 12:13:11.361968 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" event={"ID":"dcc29799-616d-44f6-8cee-0518f590df2e","Type":"ContainerStarted","Data":"2e37bf5305e6199093e4cba5f0d0de2b19527ad5d8783ea495acf27ef8c916b5"} Jan 23 12:13:11 crc kubenswrapper[4865]: I0123 12:13:11.374309 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71a53ffd-0378-43c6-b759-61a7d90a6bdd","Type":"ContainerStarted","Data":"a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679"} Jan 23 12:13:11 crc kubenswrapper[4865]: I0123 12:13:11.741937 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:13:11 crc kubenswrapper[4865]: I0123 12:13:11.746759 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7cc6994f4f-2qmtv" Jan 23 12:13:12 crc kubenswrapper[4865]: I0123 12:13:12.382360 4865 generic.go:334] "Generic (PLEG): container finished" podID="dcc29799-616d-44f6-8cee-0518f590df2e" containerID="c0ad45a1201dd30467a79e3d63b687284caa67af648a7a6e5b67eafaf1974870" exitCode=0 Jan 23 12:13:12 crc kubenswrapper[4865]: I0123 12:13:12.382482 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" event={"ID":"dcc29799-616d-44f6-8cee-0518f590df2e","Type":"ContainerDied","Data":"c0ad45a1201dd30467a79e3d63b687284caa67af648a7a6e5b67eafaf1974870"} Jan 23 12:13:12 crc kubenswrapper[4865]: I0123 12:13:12.385065 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-engine-696547d8cb-9scxl" event={"ID":"8de820a5-b4df-48bb-aa66-756fe92e787d","Type":"ContainerStarted","Data":"191bbe76bcac171b7b909265adbeac6a459a23ee5eb37ab9ec0d3ab5466bb106"} Jan 23 12:13:12 crc kubenswrapper[4865]: I0123 12:13:12.385444 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:12 crc kubenswrapper[4865]: I0123 12:13:12.450381 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-696547d8cb-9scxl" podStartSLOduration=4.450360813 podStartE2EDuration="4.450360813s" podCreationTimestamp="2026-01-23 12:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:12.448950089 +0000 UTC m=+1236.618022315" watchObservedRunningTime="2026-01-23 12:13:12.450360813 +0000 UTC m=+1236.619433049" Jan 23 12:13:14 crc kubenswrapper[4865]: I0123 12:13:14.356093 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 12:13:14 crc kubenswrapper[4865]: I0123 12:13:14.356494 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:13:14 crc kubenswrapper[4865]: I0123 12:13:14.357466 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"62c21373c0eebc0a568adb43c80526621a4e95ed48f1f7ec1047e095cb2d1298"} pod="openstack/horizon-66f7b94cdb-f7pw2" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 12:13:14 crc kubenswrapper[4865]: I0123 12:13:14.357503 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" containerID="cri-o://62c21373c0eebc0a568adb43c80526621a4e95ed48f1f7ec1047e095cb2d1298" gracePeriod=30 Jan 23 12:13:15 crc kubenswrapper[4865]: I0123 12:13:15.411563 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" event={"ID":"dcc29799-616d-44f6-8cee-0518f590df2e","Type":"ContainerStarted","Data":"0ac463c061810ecb03feb2411a902e25148ea04e2d567810ca46c09875ee333d"} Jan 23 12:13:15 crc kubenswrapper[4865]: I0123 12:13:15.412214 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:15 crc kubenswrapper[4865]: I0123 12:13:15.444495 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" podStartSLOduration=7.444476127 podStartE2EDuration="7.444476127s" podCreationTimestamp="2026-01-23 12:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:15.431616415 +0000 UTC m=+1239.600688641" watchObservedRunningTime="2026-01-23 12:13:15.444476127 +0000 UTC m=+1239.613548353" Jan 23 12:13:17 crc kubenswrapper[4865]: I0123 12:13:17.446262 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"bdac524b-c42a-4074-9ff6-98827afa19c2","Type":"ContainerStarted","Data":"93f48d54b3f4523e67a3932be43ffd4b495bbded9d678f3f337e2bf07d67145e"} Jan 23 12:13:17 crc kubenswrapper[4865]: I0123 12:13:17.468548 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.655542661 podStartE2EDuration="35.468531055s" podCreationTimestamp="2026-01-23 12:12:42 +0000 UTC" firstStartedPulling="2026-01-23 12:12:43.957309041 +0000 UTC m=+1208.126381267" lastFinishedPulling="2026-01-23 12:13:16.770297425 +0000 UTC m=+1240.939369661" observedRunningTime="2026-01-23 12:13:17.463671478 +0000 UTC m=+1241.632743704" watchObservedRunningTime="2026-01-23 12:13:17.468531055 +0000 UTC m=+1241.637603271" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.347623 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5ccd964666-7jplv"] Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.348770 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.360272 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5ccd964666-7jplv"] Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.400798 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed8cf6e-f362-4ea6-b453-ed132931c457-config-data\") pod \"heat-engine-5ccd964666-7jplv\" (UID: \"bed8cf6e-f362-4ea6-b453-ed132931c457\") " pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.400867 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed8cf6e-f362-4ea6-b453-ed132931c457-combined-ca-bundle\") pod \"heat-engine-5ccd964666-7jplv\" (UID: \"bed8cf6e-f362-4ea6-b453-ed132931c457\") " pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.401096 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc2zc\" (UniqueName: \"kubernetes.io/projected/bed8cf6e-f362-4ea6-b453-ed132931c457-kube-api-access-rc2zc\") pod \"heat-engine-5ccd964666-7jplv\" (UID: \"bed8cf6e-f362-4ea6-b453-ed132931c457\") " pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.401208 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bed8cf6e-f362-4ea6-b453-ed132931c457-config-data-custom\") pod \"heat-engine-5ccd964666-7jplv\" (UID: \"bed8cf6e-f362-4ea6-b453-ed132931c457\") " pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.414686 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7779444d7-wsm4m"] Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.416214 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.438073 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7779444d7-wsm4m"] Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.459931 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6c9b8c4c44-r25v5"] Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.461070 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.471835 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7765485f88-srzk9" event={"ID":"3d5c794f-d61a-46ab-b696-56f903df1451","Type":"ContainerStarted","Data":"0f1f8c811d386b68987a7f8c898cba546d3c9c471d06b17b6bea971cc50959bb"} Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.472655 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.484933 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-649fdd8b77-7ffrr" event={"ID":"e34e3f6e-75ef-442d-a746-030072cde322","Type":"ContainerStarted","Data":"cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584"} Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.485794 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.496467 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71a53ffd-0378-43c6-b759-61a7d90a6bdd","Type":"ContainerStarted","Data":"7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6"} Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.496668 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="ceilometer-central-agent" containerID="cri-o://01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49" gracePeriod=30 Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.496936 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.496991 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="proxy-httpd" containerID="cri-o://7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6" gracePeriod=30 Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.497050 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="sg-core" containerID="cri-o://a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679" gracePeriod=30 Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.497099 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="ceilometer-notification-agent" containerID="cri-o://7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034" gracePeriod=30 Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.508463 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-api-6c9b8c4c44-r25v5"] Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.509806 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed8cf6e-f362-4ea6-b453-ed132931c457-combined-ca-bundle\") pod \"heat-engine-5ccd964666-7jplv\" (UID: \"bed8cf6e-f362-4ea6-b453-ed132931c457\") " pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.510979 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc2zc\" (UniqueName: \"kubernetes.io/projected/bed8cf6e-f362-4ea6-b453-ed132931c457-kube-api-access-rc2zc\") pod \"heat-engine-5ccd964666-7jplv\" (UID: \"bed8cf6e-f362-4ea6-b453-ed132931c457\") " pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.511060 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-config-data-custom\") pod \"heat-cfnapi-7779444d7-wsm4m\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.511106 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bed8cf6e-f362-4ea6-b453-ed132931c457-config-data-custom\") pod \"heat-engine-5ccd964666-7jplv\" (UID: \"bed8cf6e-f362-4ea6-b453-ed132931c457\") " pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.511148 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-config-data\") pod \"heat-cfnapi-7779444d7-wsm4m\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.511240 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs8hm\" (UniqueName: \"kubernetes.io/projected/dbcf892f-3faf-4d86-8ea9-371d6646a64a-kube-api-access-qs8hm\") pod \"heat-cfnapi-7779444d7-wsm4m\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.511270 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-combined-ca-bundle\") pod \"heat-cfnapi-7779444d7-wsm4m\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.511369 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed8cf6e-f362-4ea6-b453-ed132931c457-config-data\") pod \"heat-engine-5ccd964666-7jplv\" (UID: \"bed8cf6e-f362-4ea6-b453-ed132931c457\") " pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.523128 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bed8cf6e-f362-4ea6-b453-ed132931c457-config-data\") pod \"heat-engine-5ccd964666-7jplv\" (UID: 
\"bed8cf6e-f362-4ea6-b453-ed132931c457\") " pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.547887 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7765485f88-srzk9" podStartSLOduration=4.452190438 podStartE2EDuration="10.547864042s" podCreationTimestamp="2026-01-23 12:13:08 +0000 UTC" firstStartedPulling="2026-01-23 12:13:10.619650322 +0000 UTC m=+1234.788722548" lastFinishedPulling="2026-01-23 12:13:16.715323926 +0000 UTC m=+1240.884396152" observedRunningTime="2026-01-23 12:13:18.508180733 +0000 UTC m=+1242.677252959" watchObservedRunningTime="2026-01-23 12:13:18.547864042 +0000 UTC m=+1242.716936268" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.548766 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bed8cf6e-f362-4ea6-b453-ed132931c457-config-data-custom\") pod \"heat-engine-5ccd964666-7jplv\" (UID: \"bed8cf6e-f362-4ea6-b453-ed132931c457\") " pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.552216 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bed8cf6e-f362-4ea6-b453-ed132931c457-combined-ca-bundle\") pod \"heat-engine-5ccd964666-7jplv\" (UID: \"bed8cf6e-f362-4ea6-b453-ed132931c457\") " pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.557434 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc2zc\" (UniqueName: \"kubernetes.io/projected/bed8cf6e-f362-4ea6-b453-ed132931c457-kube-api-access-rc2zc\") pod \"heat-engine-5ccd964666-7jplv\" (UID: \"bed8cf6e-f362-4ea6-b453-ed132931c457\") " pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.580460 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.03531027 podStartE2EDuration="13.58043439s" podCreationTimestamp="2026-01-23 12:13:05 +0000 UTC" firstStartedPulling="2026-01-23 12:13:06.006689681 +0000 UTC m=+1230.175761907" lastFinishedPulling="2026-01-23 12:13:16.551813801 +0000 UTC m=+1240.720886027" observedRunningTime="2026-01-23 12:13:18.548087798 +0000 UTC m=+1242.717160034" watchObservedRunningTime="2026-01-23 12:13:18.58043439 +0000 UTC m=+1242.749506616" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.608150 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-649fdd8b77-7ffrr" podStartSLOduration=4.489728885 podStartE2EDuration="10.60813137s" podCreationTimestamp="2026-01-23 12:13:08 +0000 UTC" firstStartedPulling="2026-01-23 12:13:10.603783827 +0000 UTC m=+1234.772856053" lastFinishedPulling="2026-01-23 12:13:16.722186312 +0000 UTC m=+1240.891258538" observedRunningTime="2026-01-23 12:13:18.597695938 +0000 UTC m=+1242.766768184" watchObservedRunningTime="2026-01-23 12:13:18.60813137 +0000 UTC m=+1242.777203596" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.613482 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs8hm\" (UniqueName: \"kubernetes.io/projected/dbcf892f-3faf-4d86-8ea9-371d6646a64a-kube-api-access-qs8hm\") pod \"heat-cfnapi-7779444d7-wsm4m\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: 
I0123 12:13:18.613740 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-combined-ca-bundle\") pod \"heat-cfnapi-7779444d7-wsm4m\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.613977 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8zl\" (UniqueName: \"kubernetes.io/projected/c9294575-8075-48ad-9c05-2e351d1c700a-kube-api-access-vd8zl\") pod \"heat-api-6c9b8c4c44-r25v5\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.614190 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-combined-ca-bundle\") pod \"heat-api-6c9b8c4c44-r25v5\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.614339 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-config-data-custom\") pod \"heat-api-6c9b8c4c44-r25v5\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.614507 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-config-data\") pod \"heat-api-6c9b8c4c44-r25v5\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.614703 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-config-data-custom\") pod \"heat-cfnapi-7779444d7-wsm4m\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.614829 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-config-data\") pod \"heat-cfnapi-7779444d7-wsm4m\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.620763 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-config-data-custom\") pod \"heat-cfnapi-7779444d7-wsm4m\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.620834 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-config-data\") pod \"heat-cfnapi-7779444d7-wsm4m\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 
12:13:18.630563 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-combined-ca-bundle\") pod \"heat-cfnapi-7779444d7-wsm4m\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.631241 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs8hm\" (UniqueName: \"kubernetes.io/projected/dbcf892f-3faf-4d86-8ea9-371d6646a64a-kube-api-access-qs8hm\") pod \"heat-cfnapi-7779444d7-wsm4m\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.667566 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.718147 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd8zl\" (UniqueName: \"kubernetes.io/projected/c9294575-8075-48ad-9c05-2e351d1c700a-kube-api-access-vd8zl\") pod \"heat-api-6c9b8c4c44-r25v5\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.718227 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-combined-ca-bundle\") pod \"heat-api-6c9b8c4c44-r25v5\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.718271 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-config-data-custom\") pod \"heat-api-6c9b8c4c44-r25v5\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.718293 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-config-data\") pod \"heat-api-6c9b8c4c44-r25v5\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.725333 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-config-data-custom\") pod \"heat-api-6c9b8c4c44-r25v5\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.725817 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-combined-ca-bundle\") pod \"heat-api-6c9b8c4c44-r25v5\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.726407 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-config-data\") pod \"heat-api-6c9b8c4c44-r25v5\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " 
pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.736807 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.751507 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd8zl\" (UniqueName: \"kubernetes.io/projected/c9294575-8075-48ad-9c05-2e351d1c700a-kube-api-access-vd8zl\") pod \"heat-api-6c9b8c4c44-r25v5\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.776590 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.776673 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:13:18 crc kubenswrapper[4865]: I0123 12:13:18.792046 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.221764 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.293662 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77f6db879-6nzmm"] Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.294033 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" podUID="113b112b-608a-4096-bf63-f06706ccc128" containerName="dnsmasq-dns" containerID="cri-o://3af9ec6073a9c6c3e08625a3a522dc6c7fbe13096d50f4d079ee344b9fd68a4e" gracePeriod=10 Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.378554 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="00b1558f-6054-43bb-82a7-329436ce1a0b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.175:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.389499 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5ccd964666-7jplv"] Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.416297 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7779444d7-wsm4m"] Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.532116 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7779444d7-wsm4m" event={"ID":"dbcf892f-3faf-4d86-8ea9-371d6646a64a","Type":"ContainerStarted","Data":"f8cbd6b0a46c534dfd93a01e60a674e4bc98b2623d4a085949ad937e5c647f6f"} Jan 23 12:13:19 crc kubenswrapper[4865]: W0123 12:13:19.543285 4865 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9294575_8075_48ad_9c05_2e351d1c700a.slice/crio-948a31ccbeb7bbb60a38f0bf6e3c489e6a97c8a954e3fbfa7539d63955ab9040 WatchSource:0}: Error finding container 948a31ccbeb7bbb60a38f0bf6e3c489e6a97c8a954e3fbfa7539d63955ab9040: Status 404 returned error can't find the container with id 948a31ccbeb7bbb60a38f0bf6e3c489e6a97c8a954e3fbfa7539d63955ab9040 Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.544528 4865 generic.go:334] "Generic (PLEG): container finished" podID="113b112b-608a-4096-bf63-f06706ccc128" containerID="3af9ec6073a9c6c3e08625a3a522dc6c7fbe13096d50f4d079ee344b9fd68a4e" exitCode=0 Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.544587 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" event={"ID":"113b112b-608a-4096-bf63-f06706ccc128","Type":"ContainerDied","Data":"3af9ec6073a9c6c3e08625a3a522dc6c7fbe13096d50f4d079ee344b9fd68a4e"} Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.546144 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5ccd964666-7jplv" event={"ID":"bed8cf6e-f362-4ea6-b453-ed132931c457","Type":"ContainerStarted","Data":"868222df224cb95b73eeb92b1862a9368122eb8323514580b943f8a6eaa1bbe1"} Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.562906 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c9b8c4c44-r25v5"] Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.590420 4865 generic.go:334] "Generic (PLEG): container finished" podID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerID="7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6" exitCode=0 Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.590449 4865 generic.go:334] "Generic (PLEG): container finished" podID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerID="a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679" exitCode=2 Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.590457 4865 generic.go:334] "Generic (PLEG): container finished" podID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerID="7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034" exitCode=0 Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.590741 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71a53ffd-0378-43c6-b759-61a7d90a6bdd","Type":"ContainerDied","Data":"7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6"} Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.590828 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71a53ffd-0378-43c6-b759-61a7d90a6bdd","Type":"ContainerDied","Data":"a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679"} Jan 23 12:13:19 crc kubenswrapper[4865]: I0123 12:13:19.590896 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71a53ffd-0378-43c6-b759-61a7d90a6bdd","Type":"ContainerDied","Data":"7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034"} Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.022652 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.166370 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-dns-svc\") pod \"113b112b-608a-4096-bf63-f06706ccc128\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.166746 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-ovsdbserver-sb\") pod \"113b112b-608a-4096-bf63-f06706ccc128\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.166804 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-config\") pod \"113b112b-608a-4096-bf63-f06706ccc128\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.166895 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ckhq\" (UniqueName: \"kubernetes.io/projected/113b112b-608a-4096-bf63-f06706ccc128-kube-api-access-6ckhq\") pod \"113b112b-608a-4096-bf63-f06706ccc128\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.166946 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-ovsdbserver-nb\") pod \"113b112b-608a-4096-bf63-f06706ccc128\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.166989 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-dns-swift-storage-0\") pod \"113b112b-608a-4096-bf63-f06706ccc128\" (UID: \"113b112b-608a-4096-bf63-f06706ccc128\") " Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.246964 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113b112b-608a-4096-bf63-f06706ccc128-kube-api-access-6ckhq" (OuterVolumeSpecName: "kube-api-access-6ckhq") pod "113b112b-608a-4096-bf63-f06706ccc128" (UID: "113b112b-608a-4096-bf63-f06706ccc128"). InnerVolumeSpecName "kube-api-access-6ckhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.269280 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ckhq\" (UniqueName: \"kubernetes.io/projected/113b112b-608a-4096-bf63-f06706ccc128-kube-api-access-6ckhq\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.314849 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "113b112b-608a-4096-bf63-f06706ccc128" (UID: "113b112b-608a-4096-bf63-f06706ccc128"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.372864 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.374069 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "113b112b-608a-4096-bf63-f06706ccc128" (UID: "113b112b-608a-4096-bf63-f06706ccc128"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.377150 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="00b1558f-6054-43bb-82a7-329436ce1a0b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.175:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.380709 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "113b112b-608a-4096-bf63-f06706ccc128" (UID: "113b112b-608a-4096-bf63-f06706ccc128"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.403563 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-config" (OuterVolumeSpecName: "config") pod "113b112b-608a-4096-bf63-f06706ccc128" (UID: "113b112b-608a-4096-bf63-f06706ccc128"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.427064 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "113b112b-608a-4096-bf63-f06706ccc128" (UID: "113b112b-608a-4096-bf63-f06706ccc128"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.474763 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.474807 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.474816 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.474829 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/113b112b-608a-4096-bf63-f06706ccc128-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.601690 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5ccd964666-7jplv" event={"ID":"bed8cf6e-f362-4ea6-b453-ed132931c457","Type":"ContainerStarted","Data":"39800147579c960150eb8b6238a1158c8a8e0ddbd4282755826b892ba6b9d691"} Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.602855 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.608834 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c9b8c4c44-r25v5" event={"ID":"c9294575-8075-48ad-9c05-2e351d1c700a","Type":"ContainerStarted","Data":"4dec5b6ed3fbaff061efa1f794955c0fca85a35fc5f4778f236fe10eb277a800"} Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.608883 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c9b8c4c44-r25v5" event={"ID":"c9294575-8075-48ad-9c05-2e351d1c700a","Type":"ContainerStarted","Data":"948a31ccbeb7bbb60a38f0bf6e3c489e6a97c8a954e3fbfa7539d63955ab9040"} Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.608991 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.616251 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7779444d7-wsm4m" event={"ID":"dbcf892f-3faf-4d86-8ea9-371d6646a64a","Type":"ContainerStarted","Data":"83f86924c7386555a7914a5dc8dbbfa8a037f17f2bf76581b930c66f22cda642"} Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.616477 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.619447 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" event={"ID":"113b112b-608a-4096-bf63-f06706ccc128","Type":"ContainerDied","Data":"fa79d8a67489642bab9ad667fba27d18d3e161427ecc3a5059c315054c64f0bf"} Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.619501 4865 scope.go:117] "RemoveContainer" containerID="3af9ec6073a9c6c3e08625a3a522dc6c7fbe13096d50f4d079ee344b9fd68a4e" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.619764 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77f6db879-6nzmm" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.627669 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5ccd964666-7jplv" podStartSLOduration=2.6276504689999998 podStartE2EDuration="2.627650469s" podCreationTimestamp="2026-01-23 12:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:20.626535203 +0000 UTC m=+1244.795607419" watchObservedRunningTime="2026-01-23 12:13:20.627650469 +0000 UTC m=+1244.796722695" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.665541 4865 scope.go:117] "RemoveContainer" containerID="3bbcffc851a4fa105b581f1c61fdc6b0630b442a9acdb96a24259023121d8d5c" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.690973 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6c9b8c4c44-r25v5" podStartSLOduration=2.69092877 podStartE2EDuration="2.69092877s" podCreationTimestamp="2026-01-23 12:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:20.673028967 +0000 UTC m=+1244.842101193" watchObservedRunningTime="2026-01-23 12:13:20.69092877 +0000 UTC m=+1244.860000996" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.725014 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7779444d7-wsm4m" podStartSLOduration=2.724992494 podStartE2EDuration="2.724992494s" podCreationTimestamp="2026-01-23 12:13:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:20.724947393 +0000 UTC m=+1244.894019609" watchObservedRunningTime="2026-01-23 12:13:20.724992494 +0000 UTC m=+1244.894064730" Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.779681 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77f6db879-6nzmm"] Jan 23 12:13:20 crc kubenswrapper[4865]: I0123 12:13:20.805071 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77f6db879-6nzmm"] Jan 23 12:13:21 crc kubenswrapper[4865]: I0123 12:13:21.631283 4865 generic.go:334] "Generic (PLEG): container finished" podID="c9294575-8075-48ad-9c05-2e351d1c700a" containerID="4dec5b6ed3fbaff061efa1f794955c0fca85a35fc5f4778f236fe10eb277a800" exitCode=1 Jan 23 12:13:21 crc kubenswrapper[4865]: I0123 12:13:21.631342 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c9b8c4c44-r25v5" event={"ID":"c9294575-8075-48ad-9c05-2e351d1c700a","Type":"ContainerDied","Data":"4dec5b6ed3fbaff061efa1f794955c0fca85a35fc5f4778f236fe10eb277a800"} Jan 23 12:13:21 crc kubenswrapper[4865]: I0123 12:13:21.631958 4865 scope.go:117] "RemoveContainer" containerID="4dec5b6ed3fbaff061efa1f794955c0fca85a35fc5f4778f236fe10eb277a800" Jan 23 12:13:21 crc kubenswrapper[4865]: I0123 12:13:21.634928 4865 generic.go:334] "Generic (PLEG): container finished" podID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" containerID="83f86924c7386555a7914a5dc8dbbfa8a037f17f2bf76581b930c66f22cda642" exitCode=1 Jan 23 12:13:21 crc kubenswrapper[4865]: I0123 12:13:21.634999 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7779444d7-wsm4m" 
event={"ID":"dbcf892f-3faf-4d86-8ea9-371d6646a64a","Type":"ContainerDied","Data":"83f86924c7386555a7914a5dc8dbbfa8a037f17f2bf76581b930c66f22cda642"} Jan 23 12:13:21 crc kubenswrapper[4865]: I0123 12:13:21.635497 4865 scope.go:117] "RemoveContainer" containerID="83f86924c7386555a7914a5dc8dbbfa8a037f17f2bf76581b930c66f22cda642" Jan 23 12:13:22 crc kubenswrapper[4865]: I0123 12:13:22.129565 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="113b112b-608a-4096-bf63-f06706ccc128" path="/var/lib/kubelet/pods/113b112b-608a-4096-bf63-f06706ccc128/volumes" Jan 23 12:13:22 crc kubenswrapper[4865]: I0123 12:13:22.646160 4865 generic.go:334] "Generic (PLEG): container finished" podID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" containerID="0f37a654cc0aa39cbe4e3ef3c9643175e1417f0fe5ef9c24f7b16339baa6b09b" exitCode=1 Jan 23 12:13:22 crc kubenswrapper[4865]: I0123 12:13:22.646213 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7779444d7-wsm4m" event={"ID":"dbcf892f-3faf-4d86-8ea9-371d6646a64a","Type":"ContainerDied","Data":"0f37a654cc0aa39cbe4e3ef3c9643175e1417f0fe5ef9c24f7b16339baa6b09b"} Jan 23 12:13:22 crc kubenswrapper[4865]: I0123 12:13:22.646245 4865 scope.go:117] "RemoveContainer" containerID="83f86924c7386555a7914a5dc8dbbfa8a037f17f2bf76581b930c66f22cda642" Jan 23 12:13:22 crc kubenswrapper[4865]: I0123 12:13:22.646830 4865 scope.go:117] "RemoveContainer" containerID="0f37a654cc0aa39cbe4e3ef3c9643175e1417f0fe5ef9c24f7b16339baa6b09b" Jan 23 12:13:22 crc kubenswrapper[4865]: E0123 12:13:22.647030 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7779444d7-wsm4m_openstack(dbcf892f-3faf-4d86-8ea9-371d6646a64a)\"" pod="openstack/heat-cfnapi-7779444d7-wsm4m" podUID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" Jan 23 12:13:22 crc kubenswrapper[4865]: I0123 12:13:22.650447 4865 generic.go:334] "Generic (PLEG): container finished" podID="c9294575-8075-48ad-9c05-2e351d1c700a" containerID="2fb54669295550208e9eceb50639242114d444da38c9891c108deb2c3b3c6f45" exitCode=1 Jan 23 12:13:22 crc kubenswrapper[4865]: I0123 12:13:22.651144 4865 scope.go:117] "RemoveContainer" containerID="2fb54669295550208e9eceb50639242114d444da38c9891c108deb2c3b3c6f45" Jan 23 12:13:22 crc kubenswrapper[4865]: E0123 12:13:22.651307 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c9b8c4c44-r25v5_openstack(c9294575-8075-48ad-9c05-2e351d1c700a)\"" pod="openstack/heat-api-6c9b8c4c44-r25v5" podUID="c9294575-8075-48ad-9c05-2e351d1c700a" Jan 23 12:13:22 crc kubenswrapper[4865]: I0123 12:13:22.651342 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c9b8c4c44-r25v5" event={"ID":"c9294575-8075-48ad-9c05-2e351d1c700a","Type":"ContainerDied","Data":"2fb54669295550208e9eceb50639242114d444da38c9891c108deb2c3b3c6f45"} Jan 23 12:13:22 crc kubenswrapper[4865]: I0123 12:13:22.730685 4865 scope.go:117] "RemoveContainer" containerID="4dec5b6ed3fbaff061efa1f794955c0fca85a35fc5f4778f236fe10eb277a800" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.262579 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.344783 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-config-data\") pod \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.344831 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ptp9\" (UniqueName: \"kubernetes.io/projected/71a53ffd-0378-43c6-b759-61a7d90a6bdd-kube-api-access-6ptp9\") pod \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.345620 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-scripts\") pod \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.345674 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71a53ffd-0378-43c6-b759-61a7d90a6bdd-run-httpd\") pod \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.345706 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71a53ffd-0378-43c6-b759-61a7d90a6bdd-log-httpd\") pod \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.345745 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-combined-ca-bundle\") pod \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.345786 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-sg-core-conf-yaml\") pod \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\" (UID: \"71a53ffd-0378-43c6-b759-61a7d90a6bdd\") " Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.346147 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71a53ffd-0378-43c6-b759-61a7d90a6bdd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "71a53ffd-0378-43c6-b759-61a7d90a6bdd" (UID: "71a53ffd-0378-43c6-b759-61a7d90a6bdd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.346264 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71a53ffd-0378-43c6-b759-61a7d90a6bdd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "71a53ffd-0378-43c6-b759-61a7d90a6bdd" (UID: "71a53ffd-0378-43c6-b759-61a7d90a6bdd"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.346282 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71a53ffd-0378-43c6-b759-61a7d90a6bdd-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.354098 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-scripts" (OuterVolumeSpecName: "scripts") pod "71a53ffd-0378-43c6-b759-61a7d90a6bdd" (UID: "71a53ffd-0378-43c6-b759-61a7d90a6bdd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.360955 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a53ffd-0378-43c6-b759-61a7d90a6bdd-kube-api-access-6ptp9" (OuterVolumeSpecName: "kube-api-access-6ptp9") pod "71a53ffd-0378-43c6-b759-61a7d90a6bdd" (UID: "71a53ffd-0378-43c6-b759-61a7d90a6bdd"). InnerVolumeSpecName "kube-api-access-6ptp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.391791 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "71a53ffd-0378-43c6-b759-61a7d90a6bdd" (UID: "71a53ffd-0378-43c6-b759-61a7d90a6bdd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.448713 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ptp9\" (UniqueName: \"kubernetes.io/projected/71a53ffd-0378-43c6-b759-61a7d90a6bdd-kube-api-access-6ptp9\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.448754 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.448767 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/71a53ffd-0378-43c6-b759-61a7d90a6bdd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.448777 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.486752 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-config-data" (OuterVolumeSpecName: "config-data") pod "71a53ffd-0378-43c6-b759-61a7d90a6bdd" (UID: "71a53ffd-0378-43c6-b759-61a7d90a6bdd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.506130 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71a53ffd-0378-43c6-b759-61a7d90a6bdd" (UID: "71a53ffd-0378-43c6-b759-61a7d90a6bdd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.518053 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-649fdd8b77-7ffrr"] Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.518263 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-649fdd8b77-7ffrr" podUID="e34e3f6e-75ef-442d-a746-030072cde322" containerName="heat-api" containerID="cri-o://cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584" gracePeriod=60 Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.550135 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.550168 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71a53ffd-0378-43c6-b759-61a7d90a6bdd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.563802 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7c88f697f9-dh6kn"] Jan 23 12:13:23 crc kubenswrapper[4865]: E0123 12:13:23.564189 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="sg-core" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.564210 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="sg-core" Jan 23 12:13:23 crc kubenswrapper[4865]: E0123 12:13:23.564222 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="ceilometer-notification-agent" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.564229 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="ceilometer-notification-agent" Jan 23 12:13:23 crc kubenswrapper[4865]: E0123 12:13:23.564250 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113b112b-608a-4096-bf63-f06706ccc128" containerName="init" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.564257 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="113b112b-608a-4096-bf63-f06706ccc128" containerName="init" Jan 23 12:13:23 crc kubenswrapper[4865]: E0123 12:13:23.564274 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="ceilometer-central-agent" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.564283 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="ceilometer-central-agent" Jan 23 12:13:23 crc kubenswrapper[4865]: E0123 12:13:23.564296 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="proxy-httpd" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.564302 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="proxy-httpd" Jan 23 12:13:23 crc kubenswrapper[4865]: E0123 12:13:23.564312 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113b112b-608a-4096-bf63-f06706ccc128" containerName="dnsmasq-dns" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.564318 4865 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="113b112b-608a-4096-bf63-f06706ccc128" containerName="dnsmasq-dns" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.564488 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="ceilometer-notification-agent" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.564505 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="proxy-httpd" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.564516 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="ceilometer-central-agent" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.564527 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerName="sg-core" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.564542 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="113b112b-608a-4096-bf63-f06706ccc128" containerName="dnsmasq-dns" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.565143 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.567733 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.569999 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.589933 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7765485f88-srzk9"] Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.590128 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7765485f88-srzk9" podUID="3d5c794f-d61a-46ab-b696-56f903df1451" containerName="heat-cfnapi" containerID="cri-o://0f1f8c811d386b68987a7f8c898cba546d3c9c471d06b17b6bea971cc50959bb" gracePeriod=60 Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.608533 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c88f697f9-dh6kn"] Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.613088 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-649fdd8b77-7ffrr" podUID="e34e3f6e-75ef-442d-a746-030072cde322" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.180:8004/healthcheck\": EOF" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.654504 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-combined-ca-bundle\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.654759 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tcth\" (UniqueName: \"kubernetes.io/projected/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-kube-api-access-2tcth\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.654788 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-config-data-custom\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.654821 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-public-tls-certs\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.654856 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-internal-tls-certs\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.654889 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-config-data\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.670213 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7765485f88-srzk9" podUID="3d5c794f-d61a-46ab-b696-56f903df1451" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.179:8000/healthcheck\": EOF" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.676987 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6448485bdb-7gws4"] Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.684740 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.692348 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.692622 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.696417 4865 scope.go:117] "RemoveContainer" containerID="2fb54669295550208e9eceb50639242114d444da38c9891c108deb2c3b3c6f45" Jan 23 12:13:23 crc kubenswrapper[4865]: E0123 12:13:23.696651 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c9b8c4c44-r25v5_openstack(c9294575-8075-48ad-9c05-2e351d1c700a)\"" pod="openstack/heat-api-6c9b8c4c44-r25v5" podUID="c9294575-8075-48ad-9c05-2e351d1c700a" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.701005 4865 generic.go:334] "Generic (PLEG): container finished" podID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" containerID="01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49" exitCode=0 Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.701061 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71a53ffd-0378-43c6-b759-61a7d90a6bdd","Type":"ContainerDied","Data":"01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49"} Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.701087 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"71a53ffd-0378-43c6-b759-61a7d90a6bdd","Type":"ContainerDied","Data":"4c809754647e2c4befea0d2e8c57b42b56c810a6911c9d3bdc8c04ac9369eba7"} Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.701104 4865 scope.go:117] "RemoveContainer" containerID="7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.701184 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.710260 4865 scope.go:117] "RemoveContainer" containerID="0f37a654cc0aa39cbe4e3ef3c9643175e1417f0fe5ef9c24f7b16339baa6b09b" Jan 23 12:13:23 crc kubenswrapper[4865]: E0123 12:13:23.710512 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7779444d7-wsm4m_openstack(dbcf892f-3faf-4d86-8ea9-371d6646a64a)\"" pod="openstack/heat-cfnapi-7779444d7-wsm4m" podUID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.724990 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6448485bdb-7gws4"] Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.737788 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.737839 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.739776 4865 scope.go:117] "RemoveContainer" containerID="a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.765068 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-config-data\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.765130 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-config-data\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.765153 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdwtp\" (UniqueName: \"kubernetes.io/projected/27b930a7-4474-4684-bab3-3df0f6547510-kube-api-access-qdwtp\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.765229 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-config-data-custom\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.765251 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-internal-tls-certs\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.765303 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-combined-ca-bundle\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.765380 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-combined-ca-bundle\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.765414 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tcth\" (UniqueName: \"kubernetes.io/projected/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-kube-api-access-2tcth\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.765459 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-config-data-custom\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.765487 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-public-tls-certs\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.765551 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-public-tls-certs\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.765575 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-internal-tls-certs\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.793660 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.793810 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.801651 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-public-tls-certs\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.802539 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-config-data-custom\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.805036 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tcth\" (UniqueName: \"kubernetes.io/projected/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-kube-api-access-2tcth\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.805575 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-internal-tls-certs\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.823338 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-combined-ca-bundle\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.827577 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cdfb618-1cd0-4dc4-8b7c-3d635cde8291-config-data\") pod \"heat-api-7c88f697f9-dh6kn\" (UID: \"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291\") " pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.867517 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdwtp\" (UniqueName: \"kubernetes.io/projected/27b930a7-4474-4684-bab3-3df0f6547510-kube-api-access-qdwtp\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.867592 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-config-data-custom\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.867640 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-internal-tls-certs\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.867661 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-combined-ca-bundle\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.867801 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-public-tls-certs\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.867851 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-config-data\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.873148 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-config-data\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.874059 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-config-data-custom\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.879240 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-public-tls-certs\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.880354 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-combined-ca-bundle\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.880909 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/27b930a7-4474-4684-bab3-3df0f6547510-internal-tls-certs\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.890030 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:23 crc kubenswrapper[4865]: I0123 12:13:23.934414 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdwtp\" (UniqueName: \"kubernetes.io/projected/27b930a7-4474-4684-bab3-3df0f6547510-kube-api-access-qdwtp\") pod \"heat-cfnapi-6448485bdb-7gws4\" (UID: \"27b930a7-4474-4684-bab3-3df0f6547510\") " pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.018624 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.041377 4865 scope.go:117] "RemoveContainer" containerID="7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.059901 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.083646 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.184309 4865 scope.go:117] "RemoveContainer" containerID="01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.185014 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a53ffd-0378-43c6-b759-61a7d90a6bdd" path="/var/lib/kubelet/pods/71a53ffd-0378-43c6-b759-61a7d90a6bdd/volumes" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.185973 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.197815 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.197982 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.203003 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.224509 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.284098 4865 scope.go:117] "RemoveContainer" containerID="7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6" Jan 23 12:13:24 crc kubenswrapper[4865]: E0123 12:13:24.294660 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6\": container with ID starting with 7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6 not found: ID does not exist" containerID="7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.294701 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6"} err="failed to get container status \"7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6\": rpc error: code = NotFound desc = could not find container \"7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6\": container with ID starting with 7a3a5cecacf2ad79c41362221c2541208c1d49885afa13108252c56064f32ab6 not found: ID does not exist" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.294726 4865 scope.go:117] "RemoveContainer" containerID="a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679" Jan 23 12:13:24 crc kubenswrapper[4865]: E0123 12:13:24.300079 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679\": container with ID starting with a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679 not found: ID 
does not exist" containerID="a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.300123 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679"} err="failed to get container status \"a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679\": rpc error: code = NotFound desc = could not find container \"a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679\": container with ID starting with a868112e580d601ea47d6fdd3f4e8bd49eaea5b082dbe22c9b3a6ad2137d7679 not found: ID does not exist" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.300151 4865 scope.go:117] "RemoveContainer" containerID="7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034" Jan 23 12:13:24 crc kubenswrapper[4865]: E0123 12:13:24.306013 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034\": container with ID starting with 7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034 not found: ID does not exist" containerID="7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.306055 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034"} err="failed to get container status \"7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034\": rpc error: code = NotFound desc = could not find container \"7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034\": container with ID starting with 7a1c73acf5d6822f160a1a6dc67d55c1eeac34bf8b87dcfa057f419785bc9034 not found: ID does not exist" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.306081 4865 scope.go:117] "RemoveContainer" containerID="01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49" Jan 23 12:13:24 crc kubenswrapper[4865]: E0123 12:13:24.313179 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49\": container with ID starting with 01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49 not found: ID does not exist" containerID="01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.313215 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49"} err="failed to get container status \"01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49\": rpc error: code = NotFound desc = could not find container \"01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49\": container with ID starting with 01d756a33d46dba78312b31942273150a8b37ef861ce34018ca9e35b1ab76d49 not found: ID does not exist" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.386814 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="00b1558f-6054-43bb-82a7-329436ce1a0b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.175:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:13:24 
crc kubenswrapper[4865]: I0123 12:13:24.398882 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-log-httpd\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.398965 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.399030 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-config-data\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.399051 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-scripts\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.399079 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt7ts\" (UniqueName: \"kubernetes.io/projected/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-kube-api-access-gt7ts\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.399099 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-run-httpd\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.399127 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.500841 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.501192 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-log-httpd\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.501247 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.501309 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-config-data\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.501329 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-scripts\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.501355 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt7ts\" (UniqueName: \"kubernetes.io/projected/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-kube-api-access-gt7ts\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.501377 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-run-httpd\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.502006 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-run-httpd\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.503774 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-log-httpd\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.512365 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.514049 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-scripts\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.521945 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-config-data\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.525951 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.533573 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt7ts\" (UniqueName: \"kubernetes.io/projected/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-kube-api-access-gt7ts\") pod \"ceilometer-0\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.553703 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.653654 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7c88f697f9-dh6kn"] Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.658420 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.741477 4865 scope.go:117] "RemoveContainer" containerID="2fb54669295550208e9eceb50639242114d444da38c9891c108deb2c3b3c6f45" Jan 23 12:13:24 crc kubenswrapper[4865]: E0123 12:13:24.741742 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c9b8c4c44-r25v5_openstack(c9294575-8075-48ad-9c05-2e351d1c700a)\"" pod="openstack/heat-api-6c9b8c4c44-r25v5" podUID="c9294575-8075-48ad-9c05-2e351d1c700a" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.742005 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c88f697f9-dh6kn" event={"ID":"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291","Type":"ContainerStarted","Data":"8a502b08e08b62795be28674ab9ae96707f7a405d363b387ae319051760d2a95"} Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.742296 4865 scope.go:117] "RemoveContainer" containerID="0f37a654cc0aa39cbe4e3ef3c9643175e1417f0fe5ef9c24f7b16339baa6b09b" Jan 23 12:13:24 crc kubenswrapper[4865]: E0123 12:13:24.742462 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7779444d7-wsm4m_openstack(dbcf892f-3faf-4d86-8ea9-371d6646a64a)\"" pod="openstack/heat-cfnapi-7779444d7-wsm4m" podUID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" Jan 23 12:13:24 crc kubenswrapper[4865]: I0123 12:13:24.977061 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6448485bdb-7gws4"] Jan 23 12:13:25 crc kubenswrapper[4865]: I0123 12:13:25.359786 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:25 crc kubenswrapper[4865]: I0123 12:13:25.758176 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6448485bdb-7gws4" event={"ID":"27b930a7-4474-4684-bab3-3df0f6547510","Type":"ContainerStarted","Data":"695b57c351e6c015763a8fc2515a6115b3ef1b86d70765ec72997c2b73c3ed43"} Jan 23 12:13:25 crc kubenswrapper[4865]: I0123 12:13:25.767863 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9d3ade9-0371-4c2b-a038-4f7677fca3c8","Type":"ContainerStarted","Data":"f94bc553373a4c39af5434db30976eace6b4e20b781c9c185dd49a7ea99989f8"} Jan 23 12:13:25 crc kubenswrapper[4865]: I0123 12:13:25.790574 4865 scope.go:117] "RemoveContainer" 
containerID="0f37a654cc0aa39cbe4e3ef3c9643175e1417f0fe5ef9c24f7b16339baa6b09b" Jan 23 12:13:25 crc kubenswrapper[4865]: I0123 12:13:25.790843 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7c88f697f9-dh6kn" event={"ID":"5cdfb618-1cd0-4dc4-8b7c-3d635cde8291","Type":"ContainerStarted","Data":"5b2871a12cd75baa48722cfa92c46a2612a9c312e33e8465621126e769f5a0e1"} Jan 23 12:13:25 crc kubenswrapper[4865]: E0123 12:13:25.791082 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7779444d7-wsm4m_openstack(dbcf892f-3faf-4d86-8ea9-371d6646a64a)\"" pod="openstack/heat-cfnapi-7779444d7-wsm4m" podUID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" Jan 23 12:13:25 crc kubenswrapper[4865]: I0123 12:13:25.791630 4865 scope.go:117] "RemoveContainer" containerID="2fb54669295550208e9eceb50639242114d444da38c9891c108deb2c3b3c6f45" Jan 23 12:13:25 crc kubenswrapper[4865]: E0123 12:13:25.791780 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6c9b8c4c44-r25v5_openstack(c9294575-8075-48ad-9c05-2e351d1c700a)\"" pod="openstack/heat-api-6c9b8c4c44-r25v5" podUID="c9294575-8075-48ad-9c05-2e351d1c700a" Jan 23 12:13:25 crc kubenswrapper[4865]: I0123 12:13:25.791821 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:25 crc kubenswrapper[4865]: I0123 12:13:25.825161 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7c88f697f9-dh6kn" podStartSLOduration=2.8251078080000003 podStartE2EDuration="2.825107808s" podCreationTimestamp="2026-01-23 12:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:25.812970394 +0000 UTC m=+1249.982042620" watchObservedRunningTime="2026-01-23 12:13:25.825107808 +0000 UTC m=+1249.994180034" Jan 23 12:13:26 crc kubenswrapper[4865]: I0123 12:13:26.801493 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6448485bdb-7gws4" event={"ID":"27b930a7-4474-4684-bab3-3df0f6547510","Type":"ContainerStarted","Data":"d7dba6341c05d832312a8ff0d2b46648194d200c96c2dad8b1353359dac65105"} Jan 23 12:13:26 crc kubenswrapper[4865]: I0123 12:13:26.802001 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:26 crc kubenswrapper[4865]: I0123 12:13:26.807243 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9d3ade9-0371-4c2b-a038-4f7677fca3c8","Type":"ContainerStarted","Data":"9dc6decb5655961132b32562a08d5a1abba021b417cefcdf705a74dba49340da"} Jan 23 12:13:26 crc kubenswrapper[4865]: I0123 12:13:26.829080 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6448485bdb-7gws4" podStartSLOduration=3.829055512 podStartE2EDuration="3.829055512s" podCreationTimestamp="2026-01-23 12:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:26.826816077 +0000 UTC m=+1250.995888303" watchObservedRunningTime="2026-01-23 12:13:26.829055512 +0000 UTC m=+1250.998127738" Jan 23 12:13:27 crc 
kubenswrapper[4865]: I0123 12:13:27.826916 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9d3ade9-0371-4c2b-a038-4f7677fca3c8","Type":"ContainerStarted","Data":"fb2a20186f3fff22d32bac009ff3eaae2ec790c1546606d2bcf96bb9fa6ff7a9"} Jan 23 12:13:27 crc kubenswrapper[4865]: I0123 12:13:27.827227 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9d3ade9-0371-4c2b-a038-4f7677fca3c8","Type":"ContainerStarted","Data":"e3210ac7d5cb77e1f1dd35b8a876cffd128a264256fbf9fa85b095822a33a657"} Jan 23 12:13:28 crc kubenswrapper[4865]: I0123 12:13:28.984199 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:29 crc kubenswrapper[4865]: I0123 12:13:29.845016 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9d3ade9-0371-4c2b-a038-4f7677fca3c8","Type":"ContainerStarted","Data":"4f38abdaff1c96e1a8a0e2c817f82e9215edcea1a882669ff2e5831a01644ed4"} Jan 23 12:13:29 crc kubenswrapper[4865]: I0123 12:13:29.845670 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 12:13:29 crc kubenswrapper[4865]: I0123 12:13:29.887529 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.603152358 podStartE2EDuration="5.887510111s" podCreationTimestamp="2026-01-23 12:13:24 +0000 UTC" firstStartedPulling="2026-01-23 12:13:25.359857664 +0000 UTC m=+1249.528929890" lastFinishedPulling="2026-01-23 12:13:28.644215417 +0000 UTC m=+1252.813287643" observedRunningTime="2026-01-23 12:13:29.876392432 +0000 UTC m=+1254.045464658" watchObservedRunningTime="2026-01-23 12:13:29.887510111 +0000 UTC m=+1254.056582337" Jan 23 12:13:31 crc kubenswrapper[4865]: I0123 12:13:31.397776 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7765485f88-srzk9" podUID="3d5c794f-d61a-46ab-b696-56f903df1451" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.179:8000/healthcheck\": read tcp 10.217.0.2:59184->10.217.0.179:8000: read: connection reset by peer" Jan 23 12:13:31 crc kubenswrapper[4865]: I0123 12:13:31.398523 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7765485f88-srzk9" podUID="3d5c794f-d61a-46ab-b696-56f903df1451" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.179:8000/healthcheck\": dial tcp 10.217.0.179:8000: connect: connection refused" Jan 23 12:13:31 crc kubenswrapper[4865]: I0123 12:13:31.862846 4865 generic.go:334] "Generic (PLEG): container finished" podID="3d5c794f-d61a-46ab-b696-56f903df1451" containerID="0f1f8c811d386b68987a7f8c898cba546d3c9c471d06b17b6bea971cc50959bb" exitCode=0 Jan 23 12:13:31 crc kubenswrapper[4865]: I0123 12:13:31.862887 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7765485f88-srzk9" event={"ID":"3d5c794f-d61a-46ab-b696-56f903df1451","Type":"ContainerDied","Data":"0f1f8c811d386b68987a7f8c898cba546d3c9c471d06b17b6bea971cc50959bb"} Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.046701 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-649fdd8b77-7ffrr" podUID="e34e3f6e-75ef-442d-a746-030072cde322" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.180:8004/healthcheck\": read tcp 10.217.0.2:54990->10.217.0.180:8004: read: connection reset by peer" Jan 
23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.047512 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-649fdd8b77-7ffrr" podUID="e34e3f6e-75ef-442d-a746-030072cde322" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.180:8004/healthcheck\": dial tcp 10.217.0.180:8004: connect: connection refused" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.413804 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.511012 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-config-data\") pod \"3d5c794f-d61a-46ab-b696-56f903df1451\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.511459 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbg6s\" (UniqueName: \"kubernetes.io/projected/3d5c794f-d61a-46ab-b696-56f903df1451-kube-api-access-lbg6s\") pod \"3d5c794f-d61a-46ab-b696-56f903df1451\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.511521 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-combined-ca-bundle\") pod \"3d5c794f-d61a-46ab-b696-56f903df1451\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.511663 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-config-data-custom\") pod \"3d5c794f-d61a-46ab-b696-56f903df1451\" (UID: \"3d5c794f-d61a-46ab-b696-56f903df1451\") " Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.551631 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3d5c794f-d61a-46ab-b696-56f903df1451" (UID: "3d5c794f-d61a-46ab-b696-56f903df1451"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.579882 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5c794f-d61a-46ab-b696-56f903df1451-kube-api-access-lbg6s" (OuterVolumeSpecName: "kube-api-access-lbg6s") pod "3d5c794f-d61a-46ab-b696-56f903df1451" (UID: "3d5c794f-d61a-46ab-b696-56f903df1451"). InnerVolumeSpecName "kube-api-access-lbg6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.615799 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.615833 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbg6s\" (UniqueName: \"kubernetes.io/projected/3d5c794f-d61a-46ab-b696-56f903df1451-kube-api-access-lbg6s\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.639756 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d5c794f-d61a-46ab-b696-56f903df1451" (UID: "3d5c794f-d61a-46ab-b696-56f903df1451"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.664459 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-config-data" (OuterVolumeSpecName: "config-data") pod "3d5c794f-d61a-46ab-b696-56f903df1451" (UID: "3d5c794f-d61a-46ab-b696-56f903df1451"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.721884 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.721923 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d5c794f-d61a-46ab-b696-56f903df1451-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.742087 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.823444 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-config-data\") pod \"e34e3f6e-75ef-442d-a746-030072cde322\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.823862 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-config-data-custom\") pod \"e34e3f6e-75ef-442d-a746-030072cde322\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.824154 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6hs2\" (UniqueName: \"kubernetes.io/projected/e34e3f6e-75ef-442d-a746-030072cde322-kube-api-access-b6hs2\") pod \"e34e3f6e-75ef-442d-a746-030072cde322\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.824283 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-combined-ca-bundle\") pod \"e34e3f6e-75ef-442d-a746-030072cde322\" (UID: \"e34e3f6e-75ef-442d-a746-030072cde322\") " Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.837973 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e34e3f6e-75ef-442d-a746-030072cde322" (UID: "e34e3f6e-75ef-442d-a746-030072cde322"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.841799 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e34e3f6e-75ef-442d-a746-030072cde322-kube-api-access-b6hs2" (OuterVolumeSpecName: "kube-api-access-b6hs2") pod "e34e3f6e-75ef-442d-a746-030072cde322" (UID: "e34e3f6e-75ef-442d-a746-030072cde322"). InnerVolumeSpecName "kube-api-access-b6hs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.858976 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e34e3f6e-75ef-442d-a746-030072cde322" (UID: "e34e3f6e-75ef-442d-a746-030072cde322"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.886610 4865 generic.go:334] "Generic (PLEG): container finished" podID="e34e3f6e-75ef-442d-a746-030072cde322" containerID="cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584" exitCode=0 Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.886733 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-649fdd8b77-7ffrr" event={"ID":"e34e3f6e-75ef-442d-a746-030072cde322","Type":"ContainerDied","Data":"cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584"} Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.886764 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-649fdd8b77-7ffrr" event={"ID":"e34e3f6e-75ef-442d-a746-030072cde322","Type":"ContainerDied","Data":"22f03eff6ebcd64e46488cfa97e11cbcf2c7e0f6d3dc628fa9e6c0b4ec71e9b1"} Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.886782 4865 scope.go:117] "RemoveContainer" containerID="cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.886897 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-649fdd8b77-7ffrr" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.893471 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7765485f88-srzk9" event={"ID":"3d5c794f-d61a-46ab-b696-56f903df1451","Type":"ContainerDied","Data":"3c7334385a0fda36a082c6126e0609cac8bddeb8e3ca516bdab65c2cfae0ea02"} Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.893559 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7765485f88-srzk9" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.904845 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-config-data" (OuterVolumeSpecName: "config-data") pod "e34e3f6e-75ef-442d-a746-030072cde322" (UID: "e34e3f6e-75ef-442d-a746-030072cde322"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.927042 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6hs2\" (UniqueName: \"kubernetes.io/projected/e34e3f6e-75ef-442d-a746-030072cde322-kube-api-access-b6hs2\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.927074 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.927083 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.927092 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e34e3f6e-75ef-442d-a746-030072cde322-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.951342 4865 scope.go:117] "RemoveContainer" containerID="cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.951544 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7765485f88-srzk9"] Jan 23 12:13:32 crc kubenswrapper[4865]: E0123 12:13:32.952991 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584\": container with ID starting with cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584 not found: ID does not exist" containerID="cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.953054 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584"} err="failed to get container status \"cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584\": rpc error: code = NotFound desc = could not find container \"cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584\": container with ID starting with cb313d15194596509b82b45b71cc16d3709fc5f6db2c06cdef68be9c20c2d584 not found: ID does not exist" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.953089 4865 scope.go:117] "RemoveContainer" containerID="0f1f8c811d386b68987a7f8c898cba546d3c9c471d06b17b6bea971cc50959bb" Jan 23 12:13:32 crc kubenswrapper[4865]: I0123 12:13:32.959465 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7765485f88-srzk9"] Jan 23 12:13:33 crc kubenswrapper[4865]: I0123 12:13:33.227969 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-649fdd8b77-7ffrr"] Jan 23 12:13:33 crc kubenswrapper[4865]: I0123 12:13:33.234531 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-649fdd8b77-7ffrr"] Jan 23 12:13:33 crc kubenswrapper[4865]: I0123 12:13:33.824067 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:33 crc kubenswrapper[4865]: I0123 12:13:33.824689 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="ceilometer-central-agent" containerID="cri-o://9dc6decb5655961132b32562a08d5a1abba021b417cefcdf705a74dba49340da" gracePeriod=30 Jan 23 12:13:33 crc kubenswrapper[4865]: I0123 12:13:33.824721 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="sg-core" containerID="cri-o://e3210ac7d5cb77e1f1dd35b8a876cffd128a264256fbf9fa85b095822a33a657" gracePeriod=30 Jan 23 12:13:33 crc kubenswrapper[4865]: I0123 12:13:33.824791 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="proxy-httpd" containerID="cri-o://4f38abdaff1c96e1a8a0e2c817f82e9215edcea1a882669ff2e5831a01644ed4" gracePeriod=30 Jan 23 12:13:33 crc kubenswrapper[4865]: I0123 12:13:33.824827 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="ceilometer-notification-agent" containerID="cri-o://fb2a20186f3fff22d32bac009ff3eaae2ec790c1546606d2bcf96bb9fa6ff7a9" gracePeriod=30 Jan 23 12:13:34 crc kubenswrapper[4865]: I0123 12:13:34.129195 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d5c794f-d61a-46ab-b696-56f903df1451" path="/var/lib/kubelet/pods/3d5c794f-d61a-46ab-b696-56f903df1451/volumes" Jan 23 12:13:34 crc kubenswrapper[4865]: I0123 12:13:34.129884 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e34e3f6e-75ef-442d-a746-030072cde322" path="/var/lib/kubelet/pods/e34e3f6e-75ef-442d-a746-030072cde322/volumes" Jan 23 12:13:34 crc kubenswrapper[4865]: I0123 12:13:34.917826 4865 generic.go:334] "Generic (PLEG): container finished" podID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerID="4f38abdaff1c96e1a8a0e2c817f82e9215edcea1a882669ff2e5831a01644ed4" exitCode=0 Jan 23 12:13:34 crc kubenswrapper[4865]: I0123 12:13:34.918146 4865 generic.go:334] "Generic (PLEG): container finished" podID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerID="e3210ac7d5cb77e1f1dd35b8a876cffd128a264256fbf9fa85b095822a33a657" exitCode=2 Jan 23 12:13:34 crc kubenswrapper[4865]: I0123 12:13:34.918156 4865 generic.go:334] "Generic (PLEG): container finished" podID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerID="fb2a20186f3fff22d32bac009ff3eaae2ec790c1546606d2bcf96bb9fa6ff7a9" exitCode=0 Jan 23 12:13:34 crc kubenswrapper[4865]: I0123 12:13:34.917918 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9d3ade9-0371-4c2b-a038-4f7677fca3c8","Type":"ContainerDied","Data":"4f38abdaff1c96e1a8a0e2c817f82e9215edcea1a882669ff2e5831a01644ed4"} Jan 23 12:13:34 crc kubenswrapper[4865]: I0123 12:13:34.918225 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9d3ade9-0371-4c2b-a038-4f7677fca3c8","Type":"ContainerDied","Data":"e3210ac7d5cb77e1f1dd35b8a876cffd128a264256fbf9fa85b095822a33a657"} Jan 23 12:13:34 crc kubenswrapper[4865]: I0123 12:13:34.918241 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9d3ade9-0371-4c2b-a038-4f7677fca3c8","Type":"ContainerDied","Data":"fb2a20186f3fff22d32bac009ff3eaae2ec790c1546606d2bcf96bb9fa6ff7a9"} Jan 23 12:13:34 crc kubenswrapper[4865]: I0123 12:13:34.920840 4865 generic.go:334] "Generic (PLEG): container finished" podID="581ecfce-2612-48aa-beeb-a41024ef2b6b" 
containerID="fecf4f76049da39ba53062cbfb6e4bcdfc58676fe141eea30a24d30464ca2daf" exitCode=137 Jan 23 12:13:34 crc kubenswrapper[4865]: I0123 12:13:34.920868 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d44bd7746-lpzlt" event={"ID":"581ecfce-2612-48aa-beeb-a41024ef2b6b","Type":"ContainerDied","Data":"fecf4f76049da39ba53062cbfb6e4bcdfc58676fe141eea30a24d30464ca2daf"} Jan 23 12:13:34 crc kubenswrapper[4865]: I0123 12:13:34.920895 4865 scope.go:117] "RemoveContainer" containerID="ad0bd0b06faa3989d6d91f836137fa93ac3878b4dcf0b308bb72332eb709161b" Jan 23 12:13:35 crc kubenswrapper[4865]: I0123 12:13:35.932989 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d44bd7746-lpzlt" event={"ID":"581ecfce-2612-48aa-beeb-a41024ef2b6b","Type":"ContainerStarted","Data":"773f43c7fc3b7b930164eb5a9391098dcc2b2866277b673db9ee6522e2b623e3"} Jan 23 12:13:36 crc kubenswrapper[4865]: I0123 12:13:36.289082 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7c88f697f9-dh6kn" Jan 23 12:13:36 crc kubenswrapper[4865]: I0123 12:13:36.362330 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c9b8c4c44-r25v5"] Jan 23 12:13:36 crc kubenswrapper[4865]: I0123 12:13:36.631926 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6448485bdb-7gws4" Jan 23 12:13:36 crc kubenswrapper[4865]: I0123 12:13:36.723863 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7779444d7-wsm4m"] Jan 23 12:13:36 crc kubenswrapper[4865]: I0123 12:13:36.981656 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c9b8c4c44-r25v5" event={"ID":"c9294575-8075-48ad-9c05-2e351d1c700a","Type":"ContainerDied","Data":"948a31ccbeb7bbb60a38f0bf6e3c489e6a97c8a954e3fbfa7539d63955ab9040"} Jan 23 12:13:36 crc kubenswrapper[4865]: I0123 12:13:36.981724 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="948a31ccbeb7bbb60a38f0bf6e3c489e6a97c8a954e3fbfa7539d63955ab9040" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.001804 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.128462 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-config-data-custom\") pod \"c9294575-8075-48ad-9c05-2e351d1c700a\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.128800 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-combined-ca-bundle\") pod \"c9294575-8075-48ad-9c05-2e351d1c700a\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.128924 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd8zl\" (UniqueName: \"kubernetes.io/projected/c9294575-8075-48ad-9c05-2e351d1c700a-kube-api-access-vd8zl\") pod \"c9294575-8075-48ad-9c05-2e351d1c700a\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.128956 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-config-data\") pod \"c9294575-8075-48ad-9c05-2e351d1c700a\" (UID: \"c9294575-8075-48ad-9c05-2e351d1c700a\") " Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.202778 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9294575-8075-48ad-9c05-2e351d1c700a-kube-api-access-vd8zl" (OuterVolumeSpecName: "kube-api-access-vd8zl") pod "c9294575-8075-48ad-9c05-2e351d1c700a" (UID: "c9294575-8075-48ad-9c05-2e351d1c700a"). InnerVolumeSpecName "kube-api-access-vd8zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.209739 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c9294575-8075-48ad-9c05-2e351d1c700a" (UID: "c9294575-8075-48ad-9c05-2e351d1c700a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.230849 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.230879 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd8zl\" (UniqueName: \"kubernetes.io/projected/c9294575-8075-48ad-9c05-2e351d1c700a-kube-api-access-vd8zl\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.403004 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9294575-8075-48ad-9c05-2e351d1c700a" (UID: "c9294575-8075-48ad-9c05-2e351d1c700a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.437807 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.454128 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.459881 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-config-data" (OuterVolumeSpecName: "config-data") pod "c9294575-8075-48ad-9c05-2e351d1c700a" (UID: "c9294575-8075-48ad-9c05-2e351d1c700a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.539436 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-combined-ca-bundle\") pod \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.539638 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs8hm\" (UniqueName: \"kubernetes.io/projected/dbcf892f-3faf-4d86-8ea9-371d6646a64a-kube-api-access-qs8hm\") pod \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.539720 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-config-data-custom\") pod \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.539855 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-config-data\") pod \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\" (UID: \"dbcf892f-3faf-4d86-8ea9-371d6646a64a\") " Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.540361 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9294575-8075-48ad-9c05-2e351d1c700a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.545479 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dbcf892f-3faf-4d86-8ea9-371d6646a64a" (UID: "dbcf892f-3faf-4d86-8ea9-371d6646a64a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.547775 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbcf892f-3faf-4d86-8ea9-371d6646a64a-kube-api-access-qs8hm" (OuterVolumeSpecName: "kube-api-access-qs8hm") pod "dbcf892f-3faf-4d86-8ea9-371d6646a64a" (UID: "dbcf892f-3faf-4d86-8ea9-371d6646a64a"). InnerVolumeSpecName "kube-api-access-qs8hm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.565975 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbcf892f-3faf-4d86-8ea9-371d6646a64a" (UID: "dbcf892f-3faf-4d86-8ea9-371d6646a64a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.597249 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-config-data" (OuterVolumeSpecName: "config-data") pod "dbcf892f-3faf-4d86-8ea9-371d6646a64a" (UID: "dbcf892f-3faf-4d86-8ea9-371d6646a64a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.641653 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.641693 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs8hm\" (UniqueName: \"kubernetes.io/projected/dbcf892f-3faf-4d86-8ea9-371d6646a64a-kube-api-access-qs8hm\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.641704 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.641716 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcf892f-3faf-4d86-8ea9-371d6646a64a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.990023 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c9b8c4c44-r25v5" Jan 23 12:13:37 crc kubenswrapper[4865]: I0123 12:13:37.999097 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7779444d7-wsm4m" Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.007762 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7779444d7-wsm4m" event={"ID":"dbcf892f-3faf-4d86-8ea9-371d6646a64a","Type":"ContainerDied","Data":"f8cbd6b0a46c534dfd93a01e60a674e4bc98b2623d4a085949ad937e5c647f6f"} Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.007809 4865 scope.go:117] "RemoveContainer" containerID="0f37a654cc0aa39cbe4e3ef3c9643175e1417f0fe5ef9c24f7b16339baa6b09b" Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.053535 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c9b8c4c44-r25v5"] Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.069188 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6c9b8c4c44-r25v5"] Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.072334 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7779444d7-wsm4m"] Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.079949 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7779444d7-wsm4m"] Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.151582 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9294575-8075-48ad-9c05-2e351d1c700a" path="/var/lib/kubelet/pods/c9294575-8075-48ad-9c05-2e351d1c700a/volumes" Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.153271 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" path="/var/lib/kubelet/pods/dbcf892f-3faf-4d86-8ea9-371d6646a64a/volumes" Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.533678 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.536044 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" containerName="glance-httpd" containerID="cri-o://894e90243cbb2803719bc08e23ac7ff919f356abcb7852d076b6e090eca33fce" gracePeriod=30 Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.536008 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" containerName="glance-log" containerID="cri-o://e8ae1337416c44ff566331434359686312e110501cf8946a2f15b5443c64207e" gracePeriod=30 Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.712660 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5ccd964666-7jplv" Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.767177 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-696547d8cb-9scxl"] Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.767436 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-696547d8cb-9scxl" podUID="8de820a5-b4df-48bb-aa66-756fe92e787d" containerName="heat-engine" containerID="cri-o://191bbe76bcac171b7b909265adbeac6a459a23ee5eb37ab9ec0d3ab5466bb106" gracePeriod=60 Jan 23 12:13:38 crc kubenswrapper[4865]: E0123 12:13:38.954860 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code 
-1" containerID="191bbe76bcac171b7b909265adbeac6a459a23ee5eb37ab9ec0d3ab5466bb106" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 12:13:38 crc kubenswrapper[4865]: E0123 12:13:38.956333 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="191bbe76bcac171b7b909265adbeac6a459a23ee5eb37ab9ec0d3ab5466bb106" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 12:13:38 crc kubenswrapper[4865]: E0123 12:13:38.963945 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="191bbe76bcac171b7b909265adbeac6a459a23ee5eb37ab9ec0d3ab5466bb106" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 12:13:38 crc kubenswrapper[4865]: E0123 12:13:38.964025 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-696547d8cb-9scxl" podUID="8de820a5-b4df-48bb-aa66-756fe92e787d" containerName="heat-engine" Jan 23 12:13:38 crc kubenswrapper[4865]: I0123 12:13:38.999881 4865 generic.go:334] "Generic (PLEG): container finished" podID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" containerID="e8ae1337416c44ff566331434359686312e110501cf8946a2f15b5443c64207e" exitCode=143 Jan 23 12:13:39 crc kubenswrapper[4865]: I0123 12:13:38.999986 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eec67cc0-b9ae-4767-82b9-ffa764ab07d0","Type":"ContainerDied","Data":"e8ae1337416c44ff566331434359686312e110501cf8946a2f15b5443c64207e"} Jan 23 12:13:41 crc kubenswrapper[4865]: I0123 12:13:41.992682 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.156:9292/healthcheck\": dial tcp 10.217.0.156:9292: connect: connection refused" Jan 23 12:13:41 crc kubenswrapper[4865]: I0123 12:13:41.992682 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.156:9292/healthcheck\": dial tcp 10.217.0.156:9292: connect: connection refused" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.038165 4865 generic.go:334] "Generic (PLEG): container finished" podID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" containerID="894e90243cbb2803719bc08e23ac7ff919f356abcb7852d076b6e090eca33fce" exitCode=0 Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.038230 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eec67cc0-b9ae-4767-82b9-ffa764ab07d0","Type":"ContainerDied","Data":"894e90243cbb2803719bc08e23ac7ff919f356abcb7852d076b6e090eca33fce"} Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.413219 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.413513 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="dc78e553-ea01-4581-b947-c4cff5f2ba13" containerName="glance-log" containerID="cri-o://7c5512fb92748e26a079515280f375e6f2b357d6837ba7f5d52f4e787ff04d46" gracePeriod=30 Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.413576 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dc78e553-ea01-4581-b947-c4cff5f2ba13" containerName="glance-httpd" containerID="cri-o://b99b9fe2bb82811b02471d95a3a65c8db93f84602ed05c7a379db454b7c48f4e" gracePeriod=30 Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.449109 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-l8x7m"] Jan 23 12:13:43 crc kubenswrapper[4865]: E0123 12:13:43.449456 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e34e3f6e-75ef-442d-a746-030072cde322" containerName="heat-api" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.449473 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e34e3f6e-75ef-442d-a746-030072cde322" containerName="heat-api" Jan 23 12:13:43 crc kubenswrapper[4865]: E0123 12:13:43.449490 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" containerName="heat-cfnapi" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.449496 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" containerName="heat-cfnapi" Jan 23 12:13:43 crc kubenswrapper[4865]: E0123 12:13:43.449510 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9294575-8075-48ad-9c05-2e351d1c700a" containerName="heat-api" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.449516 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9294575-8075-48ad-9c05-2e351d1c700a" containerName="heat-api" Jan 23 12:13:43 crc kubenswrapper[4865]: E0123 12:13:43.449526 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5c794f-d61a-46ab-b696-56f903df1451" containerName="heat-cfnapi" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.449531 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5c794f-d61a-46ab-b696-56f903df1451" containerName="heat-cfnapi" Jan 23 12:13:43 crc kubenswrapper[4865]: E0123 12:13:43.449540 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9294575-8075-48ad-9c05-2e351d1c700a" containerName="heat-api" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.449545 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9294575-8075-48ad-9c05-2e351d1c700a" containerName="heat-api" Jan 23 12:13:43 crc kubenswrapper[4865]: E0123 12:13:43.449568 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" containerName="heat-cfnapi" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.449574 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" containerName="heat-cfnapi" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.449733 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" containerName="heat-cfnapi" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.449748 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9294575-8075-48ad-9c05-2e351d1c700a" containerName="heat-api" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.449766 4865 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3d5c794f-d61a-46ab-b696-56f903df1451" containerName="heat-cfnapi" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.449777 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e34e3f6e-75ef-442d-a746-030072cde322" containerName="heat-api" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.449786 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9294575-8075-48ad-9c05-2e351d1c700a" containerName="heat-api" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.450333 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-l8x7m" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.486226 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-l8x7m"] Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.547988 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c20d774-960c-4422-8fa5-2cdc2a6806fe-operator-scripts\") pod \"nova-api-db-create-l8x7m\" (UID: \"4c20d774-960c-4422-8fa5-2cdc2a6806fe\") " pod="openstack/nova-api-db-create-l8x7m" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.548088 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4g9n\" (UniqueName: \"kubernetes.io/projected/4c20d774-960c-4422-8fa5-2cdc2a6806fe-kube-api-access-p4g9n\") pod \"nova-api-db-create-l8x7m\" (UID: \"4c20d774-960c-4422-8fa5-2cdc2a6806fe\") " pod="openstack/nova-api-db-create-l8x7m" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.555867 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jbdnl"] Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.556420 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbcf892f-3faf-4d86-8ea9-371d6646a64a" containerName="heat-cfnapi" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.557034 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jbdnl" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.586905 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jbdnl"] Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.626866 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-15f5-account-create-update-bxfkf"] Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.628049 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-15f5-account-create-update-bxfkf" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.630489 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.649912 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqk4w\" (UniqueName: \"kubernetes.io/projected/c0cf61d9-b096-4cea-a55e-e54176112f74-kube-api-access-sqk4w\") pod \"nova-cell0-db-create-jbdnl\" (UID: \"c0cf61d9-b096-4cea-a55e-e54176112f74\") " pod="openstack/nova-cell0-db-create-jbdnl" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.649952 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwl8f\" (UniqueName: \"kubernetes.io/projected/e2b9cf60-06e6-462b-a56d-067f710d4efb-kube-api-access-rwl8f\") pod \"nova-api-15f5-account-create-update-bxfkf\" (UID: \"e2b9cf60-06e6-462b-a56d-067f710d4efb\") " pod="openstack/nova-api-15f5-account-create-update-bxfkf" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.649995 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0cf61d9-b096-4cea-a55e-e54176112f74-operator-scripts\") pod \"nova-cell0-db-create-jbdnl\" (UID: \"c0cf61d9-b096-4cea-a55e-e54176112f74\") " pod="openstack/nova-cell0-db-create-jbdnl" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.650052 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c20d774-960c-4422-8fa5-2cdc2a6806fe-operator-scripts\") pod \"nova-api-db-create-l8x7m\" (UID: \"4c20d774-960c-4422-8fa5-2cdc2a6806fe\") " pod="openstack/nova-api-db-create-l8x7m" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.650069 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b9cf60-06e6-462b-a56d-067f710d4efb-operator-scripts\") pod \"nova-api-15f5-account-create-update-bxfkf\" (UID: \"e2b9cf60-06e6-462b-a56d-067f710d4efb\") " pod="openstack/nova-api-15f5-account-create-update-bxfkf" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.650122 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4g9n\" (UniqueName: \"kubernetes.io/projected/4c20d774-960c-4422-8fa5-2cdc2a6806fe-kube-api-access-p4g9n\") pod \"nova-api-db-create-l8x7m\" (UID: \"4c20d774-960c-4422-8fa5-2cdc2a6806fe\") " pod="openstack/nova-api-db-create-l8x7m" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.650978 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c20d774-960c-4422-8fa5-2cdc2a6806fe-operator-scripts\") pod \"nova-api-db-create-l8x7m\" (UID: \"4c20d774-960c-4422-8fa5-2cdc2a6806fe\") " pod="openstack/nova-api-db-create-l8x7m" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.658653 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-15f5-account-create-update-bxfkf"] Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.707515 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4g9n\" (UniqueName: \"kubernetes.io/projected/4c20d774-960c-4422-8fa5-2cdc2a6806fe-kube-api-access-p4g9n\") 
pod \"nova-api-db-create-l8x7m\" (UID: \"4c20d774-960c-4422-8fa5-2cdc2a6806fe\") " pod="openstack/nova-api-db-create-l8x7m" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.744324 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fdtph"] Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.751418 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b9cf60-06e6-462b-a56d-067f710d4efb-operator-scripts\") pod \"nova-api-15f5-account-create-update-bxfkf\" (UID: \"e2b9cf60-06e6-462b-a56d-067f710d4efb\") " pod="openstack/nova-api-15f5-account-create-update-bxfkf" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.751520 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqk4w\" (UniqueName: \"kubernetes.io/projected/c0cf61d9-b096-4cea-a55e-e54176112f74-kube-api-access-sqk4w\") pod \"nova-cell0-db-create-jbdnl\" (UID: \"c0cf61d9-b096-4cea-a55e-e54176112f74\") " pod="openstack/nova-cell0-db-create-jbdnl" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.751547 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwl8f\" (UniqueName: \"kubernetes.io/projected/e2b9cf60-06e6-462b-a56d-067f710d4efb-kube-api-access-rwl8f\") pod \"nova-api-15f5-account-create-update-bxfkf\" (UID: \"e2b9cf60-06e6-462b-a56d-067f710d4efb\") " pod="openstack/nova-api-15f5-account-create-update-bxfkf" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.751587 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0cf61d9-b096-4cea-a55e-e54176112f74-operator-scripts\") pod \"nova-cell0-db-create-jbdnl\" (UID: \"c0cf61d9-b096-4cea-a55e-e54176112f74\") " pod="openstack/nova-cell0-db-create-jbdnl" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.752266 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0cf61d9-b096-4cea-a55e-e54176112f74-operator-scripts\") pod \"nova-cell0-db-create-jbdnl\" (UID: \"c0cf61d9-b096-4cea-a55e-e54176112f74\") " pod="openstack/nova-cell0-db-create-jbdnl" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.752740 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b9cf60-06e6-462b-a56d-067f710d4efb-operator-scripts\") pod \"nova-api-15f5-account-create-update-bxfkf\" (UID: \"e2b9cf60-06e6-462b-a56d-067f710d4efb\") " pod="openstack/nova-api-15f5-account-create-update-bxfkf" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.754805 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fdtph" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.763757 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fdtph"] Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.770268 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-l8x7m" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.779508 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqk4w\" (UniqueName: \"kubernetes.io/projected/c0cf61d9-b096-4cea-a55e-e54176112f74-kube-api-access-sqk4w\") pod \"nova-cell0-db-create-jbdnl\" (UID: \"c0cf61d9-b096-4cea-a55e-e54176112f74\") " pod="openstack/nova-cell0-db-create-jbdnl" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.826105 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwl8f\" (UniqueName: \"kubernetes.io/projected/e2b9cf60-06e6-462b-a56d-067f710d4efb-kube-api-access-rwl8f\") pod \"nova-api-15f5-account-create-update-bxfkf\" (UID: \"e2b9cf60-06e6-462b-a56d-067f710d4efb\") " pod="openstack/nova-api-15f5-account-create-update-bxfkf" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.853121 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad-operator-scripts\") pod \"nova-cell1-db-create-fdtph\" (UID: \"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad\") " pod="openstack/nova-cell1-db-create-fdtph" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.853239 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btt5p\" (UniqueName: \"kubernetes.io/projected/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad-kube-api-access-btt5p\") pod \"nova-cell1-db-create-fdtph\" (UID: \"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad\") " pod="openstack/nova-cell1-db-create-fdtph" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.895785 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-42dc-account-create-update-xcdlv"] Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.900844 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jbdnl" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.903333 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.906192 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.932542 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-42dc-account-create-update-xcdlv"] Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.964860 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad-operator-scripts\") pod \"nova-cell1-db-create-fdtph\" (UID: \"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad\") " pod="openstack/nova-cell1-db-create-fdtph" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.965037 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btt5p\" (UniqueName: \"kubernetes.io/projected/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad-kube-api-access-btt5p\") pod \"nova-cell1-db-create-fdtph\" (UID: \"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad\") " pod="openstack/nova-cell1-db-create-fdtph" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.965474 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-15f5-account-create-update-bxfkf" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.966233 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad-operator-scripts\") pod \"nova-cell1-db-create-fdtph\" (UID: \"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad\") " pod="openstack/nova-cell1-db-create-fdtph" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.972363 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7cea-account-create-update-jt957"] Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.973474 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7cea-account-create-update-jt957" Jan 23 12:13:43 crc kubenswrapper[4865]: I0123 12:13:43.978273 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.010903 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btt5p\" (UniqueName: \"kubernetes.io/projected/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad-kube-api-access-btt5p\") pod \"nova-cell1-db-create-fdtph\" (UID: \"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad\") " pod="openstack/nova-cell1-db-create-fdtph" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.024741 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7cea-account-create-update-jt957"] Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.072969 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/395ce09d-bc53-4477-a5a7-c9ee9ab183ca-operator-scripts\") pod \"nova-cell0-42dc-account-create-update-xcdlv\" (UID: \"395ce09d-bc53-4477-a5a7-c9ee9ab183ca\") " pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.073217 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br6t6\" (UniqueName: \"kubernetes.io/projected/395ce09d-bc53-4477-a5a7-c9ee9ab183ca-kube-api-access-br6t6\") pod \"nova-cell0-42dc-account-create-update-xcdlv\" (UID: \"395ce09d-bc53-4477-a5a7-c9ee9ab183ca\") " pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.090304 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.090333 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.093909 4865 generic.go:334] "Generic (PLEG): container finished" podID="dc78e553-ea01-4581-b947-c4cff5f2ba13" containerID="7c5512fb92748e26a079515280f375e6f2b357d6837ba7f5d52f4e787ff04d46" exitCode=143 Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.093947 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc78e553-ea01-4581-b947-c4cff5f2ba13","Type":"ContainerDied","Data":"7c5512fb92748e26a079515280f375e6f2b357d6837ba7f5d52f4e787ff04d46"} Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.099889 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d44bd7746-lpzlt" 
podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.175997 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/395ce09d-bc53-4477-a5a7-c9ee9ab183ca-operator-scripts\") pod \"nova-cell0-42dc-account-create-update-xcdlv\" (UID: \"395ce09d-bc53-4477-a5a7-c9ee9ab183ca\") " pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.181274 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qgzp\" (UniqueName: \"kubernetes.io/projected/73cc2832-0e03-44be-8cef-a9af622068cf-kube-api-access-2qgzp\") pod \"nova-cell1-7cea-account-create-update-jt957\" (UID: \"73cc2832-0e03-44be-8cef-a9af622068cf\") " pod="openstack/nova-cell1-7cea-account-create-update-jt957" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.181376 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br6t6\" (UniqueName: \"kubernetes.io/projected/395ce09d-bc53-4477-a5a7-c9ee9ab183ca-kube-api-access-br6t6\") pod \"nova-cell0-42dc-account-create-update-xcdlv\" (UID: \"395ce09d-bc53-4477-a5a7-c9ee9ab183ca\") " pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.181497 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73cc2832-0e03-44be-8cef-a9af622068cf-operator-scripts\") pod \"nova-cell1-7cea-account-create-update-jt957\" (UID: \"73cc2832-0e03-44be-8cef-a9af622068cf\") " pod="openstack/nova-cell1-7cea-account-create-update-jt957" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.179640 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/395ce09d-bc53-4477-a5a7-c9ee9ab183ca-operator-scripts\") pod \"nova-cell0-42dc-account-create-update-xcdlv\" (UID: \"395ce09d-bc53-4477-a5a7-c9ee9ab183ca\") " pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.216193 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br6t6\" (UniqueName: \"kubernetes.io/projected/395ce09d-bc53-4477-a5a7-c9ee9ab183ca-kube-api-access-br6t6\") pod \"nova-cell0-42dc-account-create-update-xcdlv\" (UID: \"395ce09d-bc53-4477-a5a7-c9ee9ab183ca\") " pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.218297 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fdtph" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.259054 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.292501 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qgzp\" (UniqueName: \"kubernetes.io/projected/73cc2832-0e03-44be-8cef-a9af622068cf-kube-api-access-2qgzp\") pod \"nova-cell1-7cea-account-create-update-jt957\" (UID: \"73cc2832-0e03-44be-8cef-a9af622068cf\") " pod="openstack/nova-cell1-7cea-account-create-update-jt957" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.292754 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73cc2832-0e03-44be-8cef-a9af622068cf-operator-scripts\") pod \"nova-cell1-7cea-account-create-update-jt957\" (UID: \"73cc2832-0e03-44be-8cef-a9af622068cf\") " pod="openstack/nova-cell1-7cea-account-create-update-jt957" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.299439 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73cc2832-0e03-44be-8cef-a9af622068cf-operator-scripts\") pod \"nova-cell1-7cea-account-create-update-jt957\" (UID: \"73cc2832-0e03-44be-8cef-a9af622068cf\") " pod="openstack/nova-cell1-7cea-account-create-update-jt957" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.349588 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qgzp\" (UniqueName: \"kubernetes.io/projected/73cc2832-0e03-44be-8cef-a9af622068cf-kube-api-access-2qgzp\") pod \"nova-cell1-7cea-account-create-update-jt957\" (UID: \"73cc2832-0e03-44be-8cef-a9af622068cf\") " pod="openstack/nova-cell1-7cea-account-create-update-jt957" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.395163 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.524212 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-scripts\") pod \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.524317 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.524343 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-logs\") pod \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.524387 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-internal-tls-certs\") pod \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.524445 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-combined-ca-bundle\") pod \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.524565 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-config-data\") pod \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.524632 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-httpd-run\") pod \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.524656 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgjl8\" (UniqueName: \"kubernetes.io/projected/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-kube-api-access-wgjl8\") pod \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\" (UID: \"eec67cc0-b9ae-4767-82b9-ffa764ab07d0\") " Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.528266 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-logs" (OuterVolumeSpecName: "logs") pod "eec67cc0-b9ae-4767-82b9-ffa764ab07d0" (UID: "eec67cc0-b9ae-4767-82b9-ffa764ab07d0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.531269 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eec67cc0-b9ae-4767-82b9-ffa764ab07d0" (UID: "eec67cc0-b9ae-4767-82b9-ffa764ab07d0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.539838 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-kube-api-access-wgjl8" (OuterVolumeSpecName: "kube-api-access-wgjl8") pod "eec67cc0-b9ae-4767-82b9-ffa764ab07d0" (UID: "eec67cc0-b9ae-4767-82b9-ffa764ab07d0"). InnerVolumeSpecName "kube-api-access-wgjl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.570384 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "eec67cc0-b9ae-4767-82b9-ffa764ab07d0" (UID: "eec67cc0-b9ae-4767-82b9-ffa764ab07d0"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.583198 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-scripts" (OuterVolumeSpecName: "scripts") pod "eec67cc0-b9ae-4767-82b9-ffa764ab07d0" (UID: "eec67cc0-b9ae-4767-82b9-ffa764ab07d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.609110 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7cea-account-create-update-jt957" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.638283 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.646991 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgjl8\" (UniqueName: \"kubernetes.io/projected/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-kube-api-access-wgjl8\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.647136 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.647256 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.647372 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.653577 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eec67cc0-b9ae-4767-82b9-ffa764ab07d0" (UID: "eec67cc0-b9ae-4767-82b9-ffa764ab07d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.654717 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eec67cc0-b9ae-4767-82b9-ffa764ab07d0" (UID: "eec67cc0-b9ae-4767-82b9-ffa764ab07d0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.716989 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-l8x7m"] Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.735449 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-config-data" (OuterVolumeSpecName: "config-data") pod "eec67cc0-b9ae-4767-82b9-ffa764ab07d0" (UID: "eec67cc0-b9ae-4767-82b9-ffa764ab07d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.749392 4865 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.749436 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.749450 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eec67cc0-b9ae-4767-82b9-ffa764ab07d0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.789066 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.851765 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.971376 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jbdnl"] Jan 23 12:13:44 crc kubenswrapper[4865]: I0123 12:13:44.993213 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-15f5-account-create-update-bxfkf"] Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.103332 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fdtph"] Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.140340 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-l8x7m" event={"ID":"4c20d774-960c-4422-8fa5-2cdc2a6806fe","Type":"ContainerStarted","Data":"606e31ff21ba64d8df7a9bce389048ece59ebd83322bf77c48ffd2041a8a203c"} Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.141965 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jbdnl" event={"ID":"c0cf61d9-b096-4cea-a55e-e54176112f74","Type":"ContainerStarted","Data":"19d4abf2f026a1a27cbb640bdf2d646853696f1ac8a3fdb37fbb89a86bf8dd9a"} Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.172804 4865 generic.go:334] "Generic (PLEG): container finished" podID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerID="62c21373c0eebc0a568adb43c80526621a4e95ed48f1f7ec1047e095cb2d1298" exitCode=137 Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.172891 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerDied","Data":"62c21373c0eebc0a568adb43c80526621a4e95ed48f1f7ec1047e095cb2d1298"} Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.172940 4865 scope.go:117] "RemoveContainer" containerID="374cfbc4973d1db6573e1de86b64036956c8e20a8e1c4509c68c4283e2833d30" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.187912 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"eec67cc0-b9ae-4767-82b9-ffa764ab07d0","Type":"ContainerDied","Data":"9f161d4ec669b2c0808b7da88a17938158076df80bb306c9c7a36ad020e8da6a"} Jan 23 12:13:45 crc 
kubenswrapper[4865]: I0123 12:13:45.188009 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.198766 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-15f5-account-create-update-bxfkf" event={"ID":"e2b9cf60-06e6-462b-a56d-067f710d4efb","Type":"ContainerStarted","Data":"7fcc72835f9cf545ebc1ec3e4caaecb787e234e737eacbd5f11318c37d68d158"} Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.339397 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.368006 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.388956 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:13:45 crc kubenswrapper[4865]: E0123 12:13:45.389870 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" containerName="glance-httpd" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.390005 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" containerName="glance-httpd" Jan 23 12:13:45 crc kubenswrapper[4865]: E0123 12:13:45.390109 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" containerName="glance-log" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.390182 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" containerName="glance-log" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.390471 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" containerName="glance-httpd" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.390564 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" containerName="glance-log" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.402528 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.405655 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.412359 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.423195 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.452072 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-42dc-account-create-update-xcdlv"] Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.599036 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.599078 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169e0a07-5fad-42d6-8333-0ee7b21592c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.599131 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/169e0a07-5fad-42d6-8333-0ee7b21592c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.599172 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/169e0a07-5fad-42d6-8333-0ee7b21592c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.599197 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169e0a07-5fad-42d6-8333-0ee7b21592c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.599223 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/169e0a07-5fad-42d6-8333-0ee7b21592c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.599284 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbsh8\" (UniqueName: \"kubernetes.io/projected/169e0a07-5fad-42d6-8333-0ee7b21592c3-kube-api-access-xbsh8\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " 
pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.599324 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169e0a07-5fad-42d6-8333-0ee7b21592c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.605912 4865 scope.go:117] "RemoveContainer" containerID="894e90243cbb2803719bc08e23ac7ff919f356abcb7852d076b6e090eca33fce" Jan 23 12:13:45 crc kubenswrapper[4865]: W0123 12:13:45.634058 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod395ce09d_bc53_4477_a5a7_c9ee9ab183ca.slice/crio-1eaa08d2b67c08b2fa17179f70da4411f3a8d2f44a0d345ded61d8e3e13bcf59 WatchSource:0}: Error finding container 1eaa08d2b67c08b2fa17179f70da4411f3a8d2f44a0d345ded61d8e3e13bcf59: Status 404 returned error can't find the container with id 1eaa08d2b67c08b2fa17179f70da4411f3a8d2f44a0d345ded61d8e3e13bcf59 Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.673111 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7cea-account-create-update-jt957"] Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.703412 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/169e0a07-5fad-42d6-8333-0ee7b21592c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.703494 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/169e0a07-5fad-42d6-8333-0ee7b21592c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.703534 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169e0a07-5fad-42d6-8333-0ee7b21592c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.703570 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/169e0a07-5fad-42d6-8333-0ee7b21592c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.703651 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbsh8\" (UniqueName: \"kubernetes.io/projected/169e0a07-5fad-42d6-8333-0ee7b21592c3-kube-api-access-xbsh8\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.703708 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169e0a07-5fad-42d6-8333-0ee7b21592c3-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.703751 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.703774 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169e0a07-5fad-42d6-8333-0ee7b21592c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.714553 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/169e0a07-5fad-42d6-8333-0ee7b21592c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.714778 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/169e0a07-5fad-42d6-8333-0ee7b21592c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.719330 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.720874 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169e0a07-5fad-42d6-8333-0ee7b21592c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.727338 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/169e0a07-5fad-42d6-8333-0ee7b21592c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.729233 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169e0a07-5fad-42d6-8333-0ee7b21592c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.766898 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169e0a07-5fad-42d6-8333-0ee7b21592c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 
23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.779030 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbsh8\" (UniqueName: \"kubernetes.io/projected/169e0a07-5fad-42d6-8333-0ee7b21592c3-kube-api-access-xbsh8\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.824125 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"169e0a07-5fad-42d6-8333-0ee7b21592c3\") " pod="openstack/glance-default-internal-api-0" Jan 23 12:13:45 crc kubenswrapper[4865]: I0123 12:13:45.875401 4865 scope.go:117] "RemoveContainer" containerID="e8ae1337416c44ff566331434359686312e110501cf8946a2f15b5443c64207e" Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.055035 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.150650 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eec67cc0-b9ae-4767-82b9-ffa764ab07d0" path="/var/lib/kubelet/pods/eec67cc0-b9ae-4767-82b9-ffa764ab07d0/volumes" Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.282546 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7cea-account-create-update-jt957" event={"ID":"73cc2832-0e03-44be-8cef-a9af622068cf","Type":"ContainerStarted","Data":"db4bf99e56412d35d970b7df3160a1c2dca386110c2bccc77ccdc28db9fee656"} Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.315410 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" event={"ID":"395ce09d-bc53-4477-a5a7-c9ee9ab183ca","Type":"ContainerStarted","Data":"a25df821220f84802a1a708e0a82db1e9d97e6549f229a8264eb31ac3fd33d02"} Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.315759 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" event={"ID":"395ce09d-bc53-4477-a5a7-c9ee9ab183ca","Type":"ContainerStarted","Data":"1eaa08d2b67c08b2fa17179f70da4411f3a8d2f44a0d345ded61d8e3e13bcf59"} Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.350888 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-15f5-account-create-update-bxfkf" event={"ID":"e2b9cf60-06e6-462b-a56d-067f710d4efb","Type":"ContainerStarted","Data":"37ae52e5c7ba61596a6f379b708473bef10c2dc88e77d8dc66cd7975b71d7532"} Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.378950 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-l8x7m" event={"ID":"4c20d774-960c-4422-8fa5-2cdc2a6806fe","Type":"ContainerStarted","Data":"92214d2d7c5655ee86575ed783d48c87c5b23b09863eec3d1dac4a434bf7cb11"} Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.408543 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fdtph" event={"ID":"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad","Type":"ContainerStarted","Data":"036cf6765ec8ca27aec5a07feb99a91870a62ee29ab4901b93855af0dc28d39e"} Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.408585 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fdtph" 
event={"ID":"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad","Type":"ContainerStarted","Data":"79327f8b27bbc06abc7deaad41a823b5135569920ea12ac588e47fe6d73ffe33"} Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.471903 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jbdnl" event={"ID":"c0cf61d9-b096-4cea-a55e-e54176112f74","Type":"ContainerStarted","Data":"17966645b840cb4d6436ea87662f1de3aec6998ceefce2b3e85e5303efcf3ff2"} Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.481824 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerStarted","Data":"5742c7fb47488dc050a829f7c69eb88fe730402225438befdb4f7b95a364495a"} Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.552981 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-15f5-account-create-update-bxfkf" podStartSLOduration=3.552957522 podStartE2EDuration="3.552957522s" podCreationTimestamp="2026-01-23 12:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:46.410628039 +0000 UTC m=+1270.579700275" watchObservedRunningTime="2026-01-23 12:13:46.552957522 +0000 UTC m=+1270.722029748" Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.595433 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-fdtph" podStartSLOduration=3.5954144489999997 podStartE2EDuration="3.595414449s" podCreationTimestamp="2026-01-23 12:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:46.479964127 +0000 UTC m=+1270.649036453" watchObservedRunningTime="2026-01-23 12:13:46.595414449 +0000 UTC m=+1270.764486675" Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.618228 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-l8x7m" podStartSLOduration=3.61820554 podStartE2EDuration="3.61820554s" podCreationTimestamp="2026-01-23 12:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:46.50241675 +0000 UTC m=+1270.671488966" watchObservedRunningTime="2026-01-23 12:13:46.61820554 +0000 UTC m=+1270.787277786" Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.637987 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-jbdnl" podStartSLOduration=3.637967578 podStartE2EDuration="3.637967578s" podCreationTimestamp="2026-01-23 12:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:46.566497569 +0000 UTC m=+1270.735569795" watchObservedRunningTime="2026-01-23 12:13:46.637967578 +0000 UTC m=+1270.807039804" Jan 23 12:13:46 crc kubenswrapper[4865]: I0123 12:13:46.999439 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.506754 4865 generic.go:334] "Generic (PLEG): container finished" podID="bec931e2-d1cc-4fe3-aaa2-4bb4b141acad" containerID="036cf6765ec8ca27aec5a07feb99a91870a62ee29ab4901b93855af0dc28d39e" exitCode=0 Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.507383 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fdtph" event={"ID":"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad","Type":"ContainerDied","Data":"036cf6765ec8ca27aec5a07feb99a91870a62ee29ab4901b93855af0dc28d39e"} Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.527867 4865 generic.go:334] "Generic (PLEG): container finished" podID="4c20d774-960c-4422-8fa5-2cdc2a6806fe" containerID="92214d2d7c5655ee86575ed783d48c87c5b23b09863eec3d1dac4a434bf7cb11" exitCode=0 Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.527978 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-l8x7m" event={"ID":"4c20d774-960c-4422-8fa5-2cdc2a6806fe","Type":"ContainerDied","Data":"92214d2d7c5655ee86575ed783d48c87c5b23b09863eec3d1dac4a434bf7cb11"} Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.529725 4865 generic.go:334] "Generic (PLEG): container finished" podID="c0cf61d9-b096-4cea-a55e-e54176112f74" containerID="17966645b840cb4d6436ea87662f1de3aec6998ceefce2b3e85e5303efcf3ff2" exitCode=0 Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.529783 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jbdnl" event={"ID":"c0cf61d9-b096-4cea-a55e-e54176112f74","Type":"ContainerDied","Data":"17966645b840cb4d6436ea87662f1de3aec6998ceefce2b3e85e5303efcf3ff2"} Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.589896 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"169e0a07-5fad-42d6-8333-0ee7b21592c3","Type":"ContainerStarted","Data":"f1e70c5609621e30df11cd4012b230668e9401d6e18f2efefa728c1457fc6dc0"} Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.637331 4865 generic.go:334] "Generic (PLEG): container finished" podID="dc78e553-ea01-4581-b947-c4cff5f2ba13" containerID="b99b9fe2bb82811b02471d95a3a65c8db93f84602ed05c7a379db454b7c48f4e" exitCode=0 Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.637468 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc78e553-ea01-4581-b947-c4cff5f2ba13","Type":"ContainerDied","Data":"b99b9fe2bb82811b02471d95a3a65c8db93f84602ed05c7a379db454b7c48f4e"} Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.684388 4865 generic.go:334] "Generic (PLEG): container finished" podID="73cc2832-0e03-44be-8cef-a9af622068cf" containerID="9d12564bf440a8d6d3f4d76a1a9dbfa6604e3f09fc2c5046cb9ca90944da45d3" exitCode=0 Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.684452 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7cea-account-create-update-jt957" event={"ID":"73cc2832-0e03-44be-8cef-a9af622068cf","Type":"ContainerDied","Data":"9d12564bf440a8d6d3f4d76a1a9dbfa6604e3f09fc2c5046cb9ca90944da45d3"} Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.707139 4865 generic.go:334] "Generic (PLEG): container finished" podID="e2b9cf60-06e6-462b-a56d-067f710d4efb" containerID="37ae52e5c7ba61596a6f379b708473bef10c2dc88e77d8dc66cd7975b71d7532" exitCode=0 Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.707245 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-15f5-account-create-update-bxfkf" event={"ID":"e2b9cf60-06e6-462b-a56d-067f710d4efb","Type":"ContainerDied","Data":"37ae52e5c7ba61596a6f379b708473bef10c2dc88e77d8dc66cd7975b71d7532"} Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.714745 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="395ce09d-bc53-4477-a5a7-c9ee9ab183ca" containerID="a25df821220f84802a1a708e0a82db1e9d97e6549f229a8264eb31ac3fd33d02" exitCode=0 Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.714907 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" event={"ID":"395ce09d-bc53-4477-a5a7-c9ee9ab183ca","Type":"ContainerDied","Data":"a25df821220f84802a1a708e0a82db1e9d97e6549f229a8264eb31ac3fd33d02"} Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.716272 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.825667 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87csn\" (UniqueName: \"kubernetes.io/projected/dc78e553-ea01-4581-b947-c4cff5f2ba13-kube-api-access-87csn\") pod \"dc78e553-ea01-4581-b947-c4cff5f2ba13\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.825746 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-scripts\") pod \"dc78e553-ea01-4581-b947-c4cff5f2ba13\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.825774 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"dc78e553-ea01-4581-b947-c4cff5f2ba13\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.825802 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc78e553-ea01-4581-b947-c4cff5f2ba13-httpd-run\") pod \"dc78e553-ea01-4581-b947-c4cff5f2ba13\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.825818 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-combined-ca-bundle\") pod \"dc78e553-ea01-4581-b947-c4cff5f2ba13\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.825862 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc78e553-ea01-4581-b947-c4cff5f2ba13-logs\") pod \"dc78e553-ea01-4581-b947-c4cff5f2ba13\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.825880 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-public-tls-certs\") pod \"dc78e553-ea01-4581-b947-c4cff5f2ba13\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.825925 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-config-data\") pod \"dc78e553-ea01-4581-b947-c4cff5f2ba13\" (UID: \"dc78e553-ea01-4581-b947-c4cff5f2ba13\") " Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.830919 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dc78e553-ea01-4581-b947-c4cff5f2ba13-logs" (OuterVolumeSpecName: "logs") pod "dc78e553-ea01-4581-b947-c4cff5f2ba13" (UID: "dc78e553-ea01-4581-b947-c4cff5f2ba13"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.834995 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc78e553-ea01-4581-b947-c4cff5f2ba13-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dc78e553-ea01-4581-b947-c4cff5f2ba13" (UID: "dc78e553-ea01-4581-b947-c4cff5f2ba13"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.843251 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-scripts" (OuterVolumeSpecName: "scripts") pod "dc78e553-ea01-4581-b947-c4cff5f2ba13" (UID: "dc78e553-ea01-4581-b947-c4cff5f2ba13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.847275 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "dc78e553-ea01-4581-b947-c4cff5f2ba13" (UID: "dc78e553-ea01-4581-b947-c4cff5f2ba13"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.866373 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc78e553-ea01-4581-b947-c4cff5f2ba13-kube-api-access-87csn" (OuterVolumeSpecName: "kube-api-access-87csn") pod "dc78e553-ea01-4581-b947-c4cff5f2ba13" (UID: "dc78e553-ea01-4581-b947-c4cff5f2ba13"). InnerVolumeSpecName "kube-api-access-87csn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.928030 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87csn\" (UniqueName: \"kubernetes.io/projected/dc78e553-ea01-4581-b947-c4cff5f2ba13-kube-api-access-87csn\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.928353 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.928384 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.928399 4865 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc78e553-ea01-4581-b947-c4cff5f2ba13-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:47 crc kubenswrapper[4865]: I0123 12:13:47.928412 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc78e553-ea01-4581-b947-c4cff5f2ba13-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.030782 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-config-data" (OuterVolumeSpecName: "config-data") pod "dc78e553-ea01-4581-b947-c4cff5f2ba13" (UID: "dc78e553-ea01-4581-b947-c4cff5f2ba13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.033131 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.058365 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.084249 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dc78e553-ea01-4581-b947-c4cff5f2ba13" (UID: "dc78e553-ea01-4581-b947-c4cff5f2ba13"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.138224 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.138256 4865 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.156671 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc78e553-ea01-4581-b947-c4cff5f2ba13" (UID: "dc78e553-ea01-4581-b947-c4cff5f2ba13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.243048 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc78e553-ea01-4581-b947-c4cff5f2ba13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.752888 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc78e553-ea01-4581-b947-c4cff5f2ba13","Type":"ContainerDied","Data":"ee2d132745737c55d7577766afd93dffd44962b36efa90f13e080713e154b1b6"} Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.753225 4865 scope.go:117] "RemoveContainer" containerID="b99b9fe2bb82811b02471d95a3a65c8db93f84602ed05c7a379db454b7c48f4e" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.752934 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.758696 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"169e0a07-5fad-42d6-8333-0ee7b21592c3","Type":"ContainerStarted","Data":"6d0450306b796fc81e960a1282a81baec87403e6ce7ccd820553e2a49e59db34"} Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.779082 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.779145 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.779189 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.779937 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8f1e0b3d016dae118cc529905287d1f5d83d908d73deab63599d7b4262f2021"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.779986 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://e8f1e0b3d016dae118cc529905287d1f5d83d908d73deab63599d7b4262f2021" gracePeriod=600 Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.795498 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.812918 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.836859 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:13:48 crc kubenswrapper[4865]: E0123 12:13:48.837336 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc78e553-ea01-4581-b947-c4cff5f2ba13" containerName="glance-httpd" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.837353 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc78e553-ea01-4581-b947-c4cff5f2ba13" containerName="glance-httpd" Jan 23 12:13:48 crc kubenswrapper[4865]: E0123 12:13:48.837382 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc78e553-ea01-4581-b947-c4cff5f2ba13" containerName="glance-log" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.837392 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc78e553-ea01-4581-b947-c4cff5f2ba13" containerName="glance-log" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.837625 4865 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="dc78e553-ea01-4581-b947-c4cff5f2ba13" containerName="glance-httpd" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.837665 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc78e553-ea01-4581-b947-c4cff5f2ba13" containerName="glance-log" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.841811 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.844843 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.845092 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.847706 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.893857 4865 scope.go:117] "RemoveContainer" containerID="7c5512fb92748e26a079515280f375e6f2b357d6837ba7f5d52f4e787ff04d46" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.975943 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/112d20ec-69d9-42dc-a449-dcf0d0db28b0-logs\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.978207 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112d20ec-69d9-42dc-a449-dcf0d0db28b0-config-data\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.978301 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/112d20ec-69d9-42dc-a449-dcf0d0db28b0-scripts\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.984809 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f2vj\" (UniqueName: \"kubernetes.io/projected/112d20ec-69d9-42dc-a449-dcf0d0db28b0-kube-api-access-7f2vj\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.985055 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112d20ec-69d9-42dc-a449-dcf0d0db28b0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.985200 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/112d20ec-69d9-42dc-a449-dcf0d0db28b0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " 
pod="openstack/glance-default-external-api-0" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.985237 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/112d20ec-69d9-42dc-a449-dcf0d0db28b0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:48 crc kubenswrapper[4865]: I0123 12:13:48.985738 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:48 crc kubenswrapper[4865]: E0123 12:13:48.994647 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="191bbe76bcac171b7b909265adbeac6a459a23ee5eb37ab9ec0d3ab5466bb106" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 12:13:49 crc kubenswrapper[4865]: E0123 12:13:49.001012 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="191bbe76bcac171b7b909265adbeac6a459a23ee5eb37ab9ec0d3ab5466bb106" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 12:13:49 crc kubenswrapper[4865]: E0123 12:13:49.021321 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="191bbe76bcac171b7b909265adbeac6a459a23ee5eb37ab9ec0d3ab5466bb106" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 23 12:13:49 crc kubenswrapper[4865]: E0123 12:13:49.021408 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-696547d8cb-9scxl" podUID="8de820a5-b4df-48bb-aa66-756fe92e787d" containerName="heat-engine" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.088789 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/112d20ec-69d9-42dc-a449-dcf0d0db28b0-logs\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.089049 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112d20ec-69d9-42dc-a449-dcf0d0db28b0-config-data\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.089171 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/112d20ec-69d9-42dc-a449-dcf0d0db28b0-scripts\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc 
kubenswrapper[4865]: I0123 12:13:49.089306 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f2vj\" (UniqueName: \"kubernetes.io/projected/112d20ec-69d9-42dc-a449-dcf0d0db28b0-kube-api-access-7f2vj\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.089418 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112d20ec-69d9-42dc-a449-dcf0d0db28b0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.089547 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/112d20ec-69d9-42dc-a449-dcf0d0db28b0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.089668 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/112d20ec-69d9-42dc-a449-dcf0d0db28b0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.089781 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.090198 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.092859 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/112d20ec-69d9-42dc-a449-dcf0d0db28b0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.093122 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/112d20ec-69d9-42dc-a449-dcf0d0db28b0-logs\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.104054 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/112d20ec-69d9-42dc-a449-dcf0d0db28b0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.133256 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/112d20ec-69d9-42dc-a449-dcf0d0db28b0-scripts\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.133909 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112d20ec-69d9-42dc-a449-dcf0d0db28b0-config-data\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.134729 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112d20ec-69d9-42dc-a449-dcf0d0db28b0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.161365 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f2vj\" (UniqueName: \"kubernetes.io/projected/112d20ec-69d9-42dc-a449-dcf0d0db28b0-kube-api-access-7f2vj\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.185526 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"112d20ec-69d9-42dc-a449-dcf0d0db28b0\") " pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.404214 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7cea-account-create-update-jt957" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.475715 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.504012 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73cc2832-0e03-44be-8cef-a9af622068cf-operator-scripts\") pod \"73cc2832-0e03-44be-8cef-a9af622068cf\" (UID: \"73cc2832-0e03-44be-8cef-a9af622068cf\") " Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.504301 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qgzp\" (UniqueName: \"kubernetes.io/projected/73cc2832-0e03-44be-8cef-a9af622068cf-kube-api-access-2qgzp\") pod \"73cc2832-0e03-44be-8cef-a9af622068cf\" (UID: \"73cc2832-0e03-44be-8cef-a9af622068cf\") " Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.506439 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73cc2832-0e03-44be-8cef-a9af622068cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73cc2832-0e03-44be-8cef-a9af622068cf" (UID: "73cc2832-0e03-44be-8cef-a9af622068cf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.506645 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73cc2832-0e03-44be-8cef-a9af622068cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.530892 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73cc2832-0e03-44be-8cef-a9af622068cf-kube-api-access-2qgzp" (OuterVolumeSpecName: "kube-api-access-2qgzp") pod "73cc2832-0e03-44be-8cef-a9af622068cf" (UID: "73cc2832-0e03-44be-8cef-a9af622068cf"). InnerVolumeSpecName "kube-api-access-2qgzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.621770 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qgzp\" (UniqueName: \"kubernetes.io/projected/73cc2832-0e03-44be-8cef-a9af622068cf-kube-api-access-2qgzp\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.845135 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="e8f1e0b3d016dae118cc529905287d1f5d83d908d73deab63599d7b4262f2021" exitCode=0 Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.845501 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"e8f1e0b3d016dae118cc529905287d1f5d83d908d73deab63599d7b4262f2021"} Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.845536 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"f27c5e1f3f822d3f73db149902949b4aa2098b5ef3e947246d94e8825258d08b"} Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.845554 4865 scope.go:117] "RemoveContainer" containerID="345cdb54622a6a314c05af6fc9f3dea4d21afb272e6e5c0d8f125f9458dfa194" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.850187 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7cea-account-create-update-jt957" event={"ID":"73cc2832-0e03-44be-8cef-a9af622068cf","Type":"ContainerDied","Data":"db4bf99e56412d35d970b7df3160a1c2dca386110c2bccc77ccdc28db9fee656"} Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.850216 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db4bf99e56412d35d970b7df3160a1c2dca386110c2bccc77ccdc28db9fee656" Jan 23 12:13:49 crc kubenswrapper[4865]: I0123 12:13:49.850268 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7cea-account-create-update-jt957" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.104133 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jbdnl" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.124850 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.144306 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0cf61d9-b096-4cea-a55e-e54176112f74-operator-scripts\") pod \"c0cf61d9-b096-4cea-a55e-e54176112f74\" (UID: \"c0cf61d9-b096-4cea-a55e-e54176112f74\") " Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.144756 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqk4w\" (UniqueName: \"kubernetes.io/projected/c0cf61d9-b096-4cea-a55e-e54176112f74-kube-api-access-sqk4w\") pod \"c0cf61d9-b096-4cea-a55e-e54176112f74\" (UID: \"c0cf61d9-b096-4cea-a55e-e54176112f74\") " Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.146008 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fdtph" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.146888 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0cf61d9-b096-4cea-a55e-e54176112f74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0cf61d9-b096-4cea-a55e-e54176112f74" (UID: "c0cf61d9-b096-4cea-a55e-e54176112f74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.152558 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc78e553-ea01-4581-b947-c4cff5f2ba13" path="/var/lib/kubelet/pods/dc78e553-ea01-4581-b947-c4cff5f2ba13/volumes" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.157975 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-15f5-account-create-update-bxfkf" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.197900 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0cf61d9-b096-4cea-a55e-e54176112f74-kube-api-access-sqk4w" (OuterVolumeSpecName: "kube-api-access-sqk4w") pod "c0cf61d9-b096-4cea-a55e-e54176112f74" (UID: "c0cf61d9-b096-4cea-a55e-e54176112f74"). InnerVolumeSpecName "kube-api-access-sqk4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.206242 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-l8x7m" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.251219 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c20d774-960c-4422-8fa5-2cdc2a6806fe-operator-scripts\") pod \"4c20d774-960c-4422-8fa5-2cdc2a6806fe\" (UID: \"4c20d774-960c-4422-8fa5-2cdc2a6806fe\") " Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.251294 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b9cf60-06e6-462b-a56d-067f710d4efb-operator-scripts\") pod \"e2b9cf60-06e6-462b-a56d-067f710d4efb\" (UID: \"e2b9cf60-06e6-462b-a56d-067f710d4efb\") " Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.251332 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br6t6\" (UniqueName: \"kubernetes.io/projected/395ce09d-bc53-4477-a5a7-c9ee9ab183ca-kube-api-access-br6t6\") pod \"395ce09d-bc53-4477-a5a7-c9ee9ab183ca\" (UID: \"395ce09d-bc53-4477-a5a7-c9ee9ab183ca\") " Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.251426 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/395ce09d-bc53-4477-a5a7-c9ee9ab183ca-operator-scripts\") pod \"395ce09d-bc53-4477-a5a7-c9ee9ab183ca\" (UID: \"395ce09d-bc53-4477-a5a7-c9ee9ab183ca\") " Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.251583 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btt5p\" (UniqueName: \"kubernetes.io/projected/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad-kube-api-access-btt5p\") pod \"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad\" (UID: \"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad\") " Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.251706 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4g9n\" (UniqueName: \"kubernetes.io/projected/4c20d774-960c-4422-8fa5-2cdc2a6806fe-kube-api-access-p4g9n\") pod \"4c20d774-960c-4422-8fa5-2cdc2a6806fe\" (UID: \"4c20d774-960c-4422-8fa5-2cdc2a6806fe\") " Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.251751 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad-operator-scripts\") pod \"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad\" (UID: \"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad\") " Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.251803 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwl8f\" (UniqueName: \"kubernetes.io/projected/e2b9cf60-06e6-462b-a56d-067f710d4efb-kube-api-access-rwl8f\") pod \"e2b9cf60-06e6-462b-a56d-067f710d4efb\" (UID: \"e2b9cf60-06e6-462b-a56d-067f710d4efb\") " Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.252960 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0cf61d9-b096-4cea-a55e-e54176112f74-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.252998 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqk4w\" (UniqueName: \"kubernetes.io/projected/c0cf61d9-b096-4cea-a55e-e54176112f74-kube-api-access-sqk4w\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:50 crc 
kubenswrapper[4865]: I0123 12:13:50.254215 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/395ce09d-bc53-4477-a5a7-c9ee9ab183ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "395ce09d-bc53-4477-a5a7-c9ee9ab183ca" (UID: "395ce09d-bc53-4477-a5a7-c9ee9ab183ca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.254700 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c20d774-960c-4422-8fa5-2cdc2a6806fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c20d774-960c-4422-8fa5-2cdc2a6806fe" (UID: "4c20d774-960c-4422-8fa5-2cdc2a6806fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.255119 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2b9cf60-06e6-462b-a56d-067f710d4efb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2b9cf60-06e6-462b-a56d-067f710d4efb" (UID: "e2b9cf60-06e6-462b-a56d-067f710d4efb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.261637 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bec931e2-d1cc-4fe3-aaa2-4bb4b141acad" (UID: "bec931e2-d1cc-4fe3-aaa2-4bb4b141acad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.300001 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395ce09d-bc53-4477-a5a7-c9ee9ab183ca-kube-api-access-br6t6" (OuterVolumeSpecName: "kube-api-access-br6t6") pod "395ce09d-bc53-4477-a5a7-c9ee9ab183ca" (UID: "395ce09d-bc53-4477-a5a7-c9ee9ab183ca"). InnerVolumeSpecName "kube-api-access-br6t6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.300102 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2b9cf60-06e6-462b-a56d-067f710d4efb-kube-api-access-rwl8f" (OuterVolumeSpecName: "kube-api-access-rwl8f") pod "e2b9cf60-06e6-462b-a56d-067f710d4efb" (UID: "e2b9cf60-06e6-462b-a56d-067f710d4efb"). InnerVolumeSpecName "kube-api-access-rwl8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.313818 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c20d774-960c-4422-8fa5-2cdc2a6806fe-kube-api-access-p4g9n" (OuterVolumeSpecName: "kube-api-access-p4g9n") pod "4c20d774-960c-4422-8fa5-2cdc2a6806fe" (UID: "4c20d774-960c-4422-8fa5-2cdc2a6806fe"). InnerVolumeSpecName "kube-api-access-p4g9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.323923 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad-kube-api-access-btt5p" (OuterVolumeSpecName: "kube-api-access-btt5p") pod "bec931e2-d1cc-4fe3-aaa2-4bb4b141acad" (UID: "bec931e2-d1cc-4fe3-aaa2-4bb4b141acad"). 
InnerVolumeSpecName "kube-api-access-btt5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.363276 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btt5p\" (UniqueName: \"kubernetes.io/projected/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad-kube-api-access-btt5p\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.363306 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4g9n\" (UniqueName: \"kubernetes.io/projected/4c20d774-960c-4422-8fa5-2cdc2a6806fe-kube-api-access-p4g9n\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.363346 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.363357 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwl8f\" (UniqueName: \"kubernetes.io/projected/e2b9cf60-06e6-462b-a56d-067f710d4efb-kube-api-access-rwl8f\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.363366 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c20d774-960c-4422-8fa5-2cdc2a6806fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.363375 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b9cf60-06e6-462b-a56d-067f710d4efb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.363386 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br6t6\" (UniqueName: \"kubernetes.io/projected/395ce09d-bc53-4477-a5a7-c9ee9ab183ca-kube-api-access-br6t6\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.363395 4865 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/395ce09d-bc53-4477-a5a7-c9ee9ab183ca-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.597633 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.903115 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" event={"ID":"395ce09d-bc53-4477-a5a7-c9ee9ab183ca","Type":"ContainerDied","Data":"1eaa08d2b67c08b2fa17179f70da4411f3a8d2f44a0d345ded61d8e3e13bcf59"} Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.903406 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eaa08d2b67c08b2fa17179f70da4411f3a8d2f44a0d345ded61d8e3e13bcf59" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.903505 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-42dc-account-create-update-xcdlv" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.954296 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"112d20ec-69d9-42dc-a449-dcf0d0db28b0","Type":"ContainerStarted","Data":"38af34a44c1386c5c5be5758068e318275f6c238a34573676048d4c75d5b41d4"} Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.958965 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-15f5-account-create-update-bxfkf" event={"ID":"e2b9cf60-06e6-462b-a56d-067f710d4efb","Type":"ContainerDied","Data":"7fcc72835f9cf545ebc1ec3e4caaecb787e234e737eacbd5f11318c37d68d158"} Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.959022 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fcc72835f9cf545ebc1ec3e4caaecb787e234e737eacbd5f11318c37d68d158" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.959124 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-15f5-account-create-update-bxfkf" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.978048 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fdtph" event={"ID":"bec931e2-d1cc-4fe3-aaa2-4bb4b141acad","Type":"ContainerDied","Data":"79327f8b27bbc06abc7deaad41a823b5135569920ea12ac588e47fe6d73ffe33"} Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.978086 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79327f8b27bbc06abc7deaad41a823b5135569920ea12ac588e47fe6d73ffe33" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.978170 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fdtph" Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.981933 4865 generic.go:334] "Generic (PLEG): container finished" podID="8de820a5-b4df-48bb-aa66-756fe92e787d" containerID="191bbe76bcac171b7b909265adbeac6a459a23ee5eb37ab9ec0d3ab5466bb106" exitCode=0 Jan 23 12:13:50 crc kubenswrapper[4865]: I0123 12:13:50.981981 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-696547d8cb-9scxl" event={"ID":"8de820a5-b4df-48bb-aa66-756fe92e787d","Type":"ContainerDied","Data":"191bbe76bcac171b7b909265adbeac6a459a23ee5eb37ab9ec0d3ab5466bb106"} Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.003441 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-l8x7m" event={"ID":"4c20d774-960c-4422-8fa5-2cdc2a6806fe","Type":"ContainerDied","Data":"606e31ff21ba64d8df7a9bce389048ece59ebd83322bf77c48ffd2041a8a203c"} Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.003487 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="606e31ff21ba64d8df7a9bce389048ece59ebd83322bf77c48ffd2041a8a203c" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.003668 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-l8x7m" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.013772 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.018862 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jbdnl" event={"ID":"c0cf61d9-b096-4cea-a55e-e54176112f74","Type":"ContainerDied","Data":"19d4abf2f026a1a27cbb640bdf2d646853696f1ac8a3fdb37fbb89a86bf8dd9a"} Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.018899 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19d4abf2f026a1a27cbb640bdf2d646853696f1ac8a3fdb37fbb89a86bf8dd9a" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.018963 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jbdnl" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.031368 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"169e0a07-5fad-42d6-8333-0ee7b21592c3","Type":"ContainerStarted","Data":"7180911a8b4eaf14e1f94251cd82fbdf01a31a36c333e18f4ebfcca5110de883"} Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.087171 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.087147667 podStartE2EDuration="6.087147667s" podCreationTimestamp="2026-01-23 12:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:51.07692868 +0000 UTC m=+1275.246000906" watchObservedRunningTime="2026-01-23 12:13:51.087147667 +0000 UTC m=+1275.256219883" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.097499 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwpkm\" (UniqueName: \"kubernetes.io/projected/8de820a5-b4df-48bb-aa66-756fe92e787d-kube-api-access-vwpkm\") pod \"8de820a5-b4df-48bb-aa66-756fe92e787d\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.097559 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-combined-ca-bundle\") pod \"8de820a5-b4df-48bb-aa66-756fe92e787d\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.097632 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-config-data\") pod \"8de820a5-b4df-48bb-aa66-756fe92e787d\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.097684 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-config-data-custom\") pod \"8de820a5-b4df-48bb-aa66-756fe92e787d\" (UID: \"8de820a5-b4df-48bb-aa66-756fe92e787d\") " Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.106788 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8de820a5-b4df-48bb-aa66-756fe92e787d" (UID: "8de820a5-b4df-48bb-aa66-756fe92e787d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.121397 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de820a5-b4df-48bb-aa66-756fe92e787d-kube-api-access-vwpkm" (OuterVolumeSpecName: "kube-api-access-vwpkm") pod "8de820a5-b4df-48bb-aa66-756fe92e787d" (UID: "8de820a5-b4df-48bb-aa66-756fe92e787d"). InnerVolumeSpecName "kube-api-access-vwpkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.170966 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8de820a5-b4df-48bb-aa66-756fe92e787d" (UID: "8de820a5-b4df-48bb-aa66-756fe92e787d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.204815 4865 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.204837 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwpkm\" (UniqueName: \"kubernetes.io/projected/8de820a5-b4df-48bb-aa66-756fe92e787d-kube-api-access-vwpkm\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.204846 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.237690 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-config-data" (OuterVolumeSpecName: "config-data") pod "8de820a5-b4df-48bb-aa66-756fe92e787d" (UID: "8de820a5-b4df-48bb-aa66-756fe92e787d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:51 crc kubenswrapper[4865]: I0123 12:13:51.317501 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8de820a5-b4df-48bb-aa66-756fe92e787d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:52 crc kubenswrapper[4865]: I0123 12:13:52.098904 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-696547d8cb-9scxl" event={"ID":"8de820a5-b4df-48bb-aa66-756fe92e787d","Type":"ContainerDied","Data":"f3bf1575715eada97a91318fd65994af6ed350492f78c3a3dd6b2f07a0731959"} Jan 23 12:13:52 crc kubenswrapper[4865]: I0123 12:13:52.099191 4865 scope.go:117] "RemoveContainer" containerID="191bbe76bcac171b7b909265adbeac6a459a23ee5eb37ab9ec0d3ab5466bb106" Jan 23 12:13:52 crc kubenswrapper[4865]: I0123 12:13:52.099238 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:13:52 crc kubenswrapper[4865]: I0123 12:13:52.168863 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"112d20ec-69d9-42dc-a449-dcf0d0db28b0","Type":"ContainerStarted","Data":"1a3b1e1e5436b97eccea8b32e58e6500a5386ced90982102d3027084c57ab67d"} Jan 23 12:13:52 crc kubenswrapper[4865]: I0123 12:13:52.185863 4865 generic.go:334] "Generic (PLEG): container finished" podID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerID="9dc6decb5655961132b32562a08d5a1abba021b417cefcdf705a74dba49340da" exitCode=0 Jan 23 12:13:52 crc kubenswrapper[4865]: I0123 12:13:52.187419 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9d3ade9-0371-4c2b-a038-4f7677fca3c8","Type":"ContainerDied","Data":"9dc6decb5655961132b32562a08d5a1abba021b417cefcdf705a74dba49340da"} Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.137926 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.251491 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9d3ade9-0371-4c2b-a038-4f7677fca3c8","Type":"ContainerDied","Data":"f94bc553373a4c39af5434db30976eace6b4e20b781c9c185dd49a7ea99989f8"} Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.251581 4865 scope.go:117] "RemoveContainer" containerID="4f38abdaff1c96e1a8a0e2c817f82e9215edcea1a882669ff2e5831a01644ed4" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.251882 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.270309 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-sg-core-conf-yaml\") pod \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.274441 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt7ts\" (UniqueName: \"kubernetes.io/projected/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-kube-api-access-gt7ts\") pod \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.274560 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-config-data\") pod \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.274818 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-log-httpd\") pod \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.274883 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-combined-ca-bundle\") pod \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " Jan 23 12:13:53 crc 
kubenswrapper[4865]: I0123 12:13:53.274942 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-run-httpd\") pod \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.276524 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c9d3ade9-0371-4c2b-a038-4f7677fca3c8" (UID: "c9d3ade9-0371-4c2b-a038-4f7677fca3c8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.282942 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c9d3ade9-0371-4c2b-a038-4f7677fca3c8" (UID: "c9d3ade9-0371-4c2b-a038-4f7677fca3c8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.306816 4865 scope.go:117] "RemoveContainer" containerID="e3210ac7d5cb77e1f1dd35b8a876cffd128a264256fbf9fa85b095822a33a657" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.343801 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-kube-api-access-gt7ts" (OuterVolumeSpecName: "kube-api-access-gt7ts") pod "c9d3ade9-0371-4c2b-a038-4f7677fca3c8" (UID: "c9d3ade9-0371-4c2b-a038-4f7677fca3c8"). InnerVolumeSpecName "kube-api-access-gt7ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.381689 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-scripts\") pod \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\" (UID: \"c9d3ade9-0371-4c2b-a038-4f7677fca3c8\") " Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.381982 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt7ts\" (UniqueName: \"kubernetes.io/projected/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-kube-api-access-gt7ts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.382000 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.382009 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.398646 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c9d3ade9-0371-4c2b-a038-4f7677fca3c8" (UID: "c9d3ade9-0371-4c2b-a038-4f7677fca3c8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.399144 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-scripts" (OuterVolumeSpecName: "scripts") pod "c9d3ade9-0371-4c2b-a038-4f7677fca3c8" (UID: "c9d3ade9-0371-4c2b-a038-4f7677fca3c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.482333 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-config-data" (OuterVolumeSpecName: "config-data") pod "c9d3ade9-0371-4c2b-a038-4f7677fca3c8" (UID: "c9d3ade9-0371-4c2b-a038-4f7677fca3c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.483445 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.483536 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.483643 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.490957 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9d3ade9-0371-4c2b-a038-4f7677fca3c8" (UID: "c9d3ade9-0371-4c2b-a038-4f7677fca3c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.564360 4865 scope.go:117] "RemoveContainer" containerID="fb2a20186f3fff22d32bac009ff3eaae2ec790c1546606d2bcf96bb9fa6ff7a9" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.585917 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9d3ade9-0371-4c2b-a038-4f7677fca3c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.590638 4865 scope.go:117] "RemoveContainer" containerID="9dc6decb5655961132b32562a08d5a1abba021b417cefcdf705a74dba49340da" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.600762 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.613663 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.624974 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:53 crc kubenswrapper[4865]: E0123 12:13:53.625507 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de820a5-b4df-48bb-aa66-756fe92e787d" containerName="heat-engine" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.625572 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8de820a5-b4df-48bb-aa66-756fe92e787d" containerName="heat-engine" Jan 23 12:13:53 crc kubenswrapper[4865]: E0123 12:13:53.625651 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395ce09d-bc53-4477-a5a7-c9ee9ab183ca" containerName="mariadb-account-create-update" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.625706 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="395ce09d-bc53-4477-a5a7-c9ee9ab183ca" containerName="mariadb-account-create-update" Jan 23 12:13:53 crc kubenswrapper[4865]: E0123 12:13:53.625800 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2b9cf60-06e6-462b-a56d-067f710d4efb" containerName="mariadb-account-create-update" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.625864 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2b9cf60-06e6-462b-a56d-067f710d4efb" containerName="mariadb-account-create-update" Jan 23 12:13:53 crc kubenswrapper[4865]: E0123 12:13:53.625922 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="ceilometer-central-agent" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.625974 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="ceilometer-central-agent" Jan 23 12:13:53 crc kubenswrapper[4865]: E0123 12:13:53.626034 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="sg-core" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.626091 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="sg-core" Jan 23 12:13:53 crc kubenswrapper[4865]: E0123 12:13:53.626187 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c20d774-960c-4422-8fa5-2cdc2a6806fe" containerName="mariadb-database-create" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.626242 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c20d774-960c-4422-8fa5-2cdc2a6806fe" 
containerName="mariadb-database-create" Jan 23 12:13:53 crc kubenswrapper[4865]: E0123 12:13:53.626294 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="ceilometer-notification-agent" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.626345 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="ceilometer-notification-agent" Jan 23 12:13:53 crc kubenswrapper[4865]: E0123 12:13:53.626397 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="proxy-httpd" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.629527 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="proxy-httpd" Jan 23 12:13:53 crc kubenswrapper[4865]: E0123 12:13:53.629674 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73cc2832-0e03-44be-8cef-a9af622068cf" containerName="mariadb-account-create-update" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.629747 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="73cc2832-0e03-44be-8cef-a9af622068cf" containerName="mariadb-account-create-update" Jan 23 12:13:53 crc kubenswrapper[4865]: E0123 12:13:53.629822 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0cf61d9-b096-4cea-a55e-e54176112f74" containerName="mariadb-database-create" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.629902 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0cf61d9-b096-4cea-a55e-e54176112f74" containerName="mariadb-database-create" Jan 23 12:13:53 crc kubenswrapper[4865]: E0123 12:13:53.629976 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec931e2-d1cc-4fe3-aaa2-4bb4b141acad" containerName="mariadb-database-create" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.630029 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec931e2-d1cc-4fe3-aaa2-4bb4b141acad" containerName="mariadb-database-create" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.630371 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="73cc2832-0e03-44be-8cef-a9af622068cf" containerName="mariadb-account-create-update" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.630446 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec931e2-d1cc-4fe3-aaa2-4bb4b141acad" containerName="mariadb-database-create" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.630512 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b9cf60-06e6-462b-a56d-067f710d4efb" containerName="mariadb-account-create-update" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.630566 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0cf61d9-b096-4cea-a55e-e54176112f74" containerName="mariadb-database-create" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.630638 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="ceilometer-notification-agent" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.630695 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de820a5-b4df-48bb-aa66-756fe92e787d" containerName="heat-engine" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.630758 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" 
containerName="sg-core" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.630903 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="proxy-httpd" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.630965 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" containerName="ceilometer-central-agent" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.631026 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c20d774-960c-4422-8fa5-2cdc2a6806fe" containerName="mariadb-database-create" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.631082 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="395ce09d-bc53-4477-a5a7-c9ee9ab183ca" containerName="mariadb-account-create-update" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.633518 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.636583 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.639520 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.641018 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.692106 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-scripts\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.692190 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.692229 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm557\" (UniqueName: \"kubernetes.io/projected/6ac84596-738e-45f8-99e6-60c38a533175-kube-api-access-wm557\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.692250 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ac84596-738e-45f8-99e6-60c38a533175-log-httpd\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.692292 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-config-data\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.692334 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/6ac84596-738e-45f8-99e6-60c38a533175-run-httpd\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.692374 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.793253 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-config-data\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.793558 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ac84596-738e-45f8-99e6-60c38a533175-run-httpd\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.793612 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.793642 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-scripts\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.793679 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.793708 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm557\" (UniqueName: \"kubernetes.io/projected/6ac84596-738e-45f8-99e6-60c38a533175-kube-api-access-wm557\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.793729 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ac84596-738e-45f8-99e6-60c38a533175-log-httpd\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.794082 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ac84596-738e-45f8-99e6-60c38a533175-run-httpd\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.794334 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6ac84596-738e-45f8-99e6-60c38a533175-log-httpd\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.797385 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.798026 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-scripts\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.798240 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-config-data\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.799486 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.899144 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm557\" (UniqueName: \"kubernetes.io/projected/6ac84596-738e-45f8-99e6-60c38a533175-kube-api-access-wm557\") pod \"ceilometer-0\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " pod="openstack/ceilometer-0" Jan 23 12:13:53 crc kubenswrapper[4865]: I0123 12:13:53.966667 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.091417 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.133806 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9d3ade9-0371-4c2b-a038-4f7677fca3c8" path="/var/lib/kubelet/pods/c9d3ade9-0371-4c2b-a038-4f7677fca3c8/volumes" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.279546 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"112d20ec-69d9-42dc-a449-dcf0d0db28b0","Type":"ContainerStarted","Data":"1448b1df9e0b2f7cbeae3a010773c544789849bcf7d1a39da45fcebbc4d95373"} Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.338891 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.33886355 podStartE2EDuration="6.33886355s" podCreationTimestamp="2026-01-23 12:13:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:13:54.333168393 +0000 UTC m=+1278.502240619" watchObservedRunningTime="2026-01-23 12:13:54.33886355 +0000 UTC m=+1278.507935786" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.354718 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.355654 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.580264 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rq28k"] Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.581750 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.585074 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.586295 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.594329 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5fbhx" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.608969 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rq28k"] Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.675012 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.712411 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rq28k\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.712465 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84mpg\" (UniqueName: \"kubernetes.io/projected/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-kube-api-access-84mpg\") pod \"nova-cell0-conductor-db-sync-rq28k\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.712499 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-scripts\") pod \"nova-cell0-conductor-db-sync-rq28k\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.712677 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-config-data\") pod \"nova-cell0-conductor-db-sync-rq28k\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.814506 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rq28k\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.814850 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84mpg\" (UniqueName: \"kubernetes.io/projected/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-kube-api-access-84mpg\") pod \"nova-cell0-conductor-db-sync-rq28k\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.814881 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-scripts\") pod \"nova-cell0-conductor-db-sync-rq28k\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.814909 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-config-data\") pod \"nova-cell0-conductor-db-sync-rq28k\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.824327 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-scripts\") pod \"nova-cell0-conductor-db-sync-rq28k\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.825244 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rq28k\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.831083 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-config-data\") pod \"nova-cell0-conductor-db-sync-rq28k\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.834819 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84mpg\" (UniqueName: \"kubernetes.io/projected/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-kube-api-access-84mpg\") pod \"nova-cell0-conductor-db-sync-rq28k\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:54 crc kubenswrapper[4865]: I0123 12:13:54.906026 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:13:55 crc kubenswrapper[4865]: I0123 12:13:55.311194 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ac84596-738e-45f8-99e6-60c38a533175","Type":"ContainerStarted","Data":"10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e"} Jan 23 12:13:55 crc kubenswrapper[4865]: I0123 12:13:55.311795 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ac84596-738e-45f8-99e6-60c38a533175","Type":"ContainerStarted","Data":"1e58ce3202d41be345519fc39a4fd1d9bcfd8e5af81c89fbfc987c860d04f2d5"} Jan 23 12:13:55 crc kubenswrapper[4865]: I0123 12:13:55.438330 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rq28k"] Jan 23 12:13:55 crc kubenswrapper[4865]: W0123 12:13:55.484182 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb70ef0d_40c1_4ee9_b73e_98b471e378c2.slice/crio-2c14046e1a28a6f22725271885caca8c6aa98cab9311ed84d85b472cae7b5308 WatchSource:0}: Error finding container 2c14046e1a28a6f22725271885caca8c6aa98cab9311ed84d85b472cae7b5308: Status 404 returned error can't find the container with id 2c14046e1a28a6f22725271885caca8c6aa98cab9311ed84d85b472cae7b5308 Jan 23 12:13:56 crc kubenswrapper[4865]: I0123 12:13:56.060434 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 12:13:56 crc kubenswrapper[4865]: I0123 12:13:56.060500 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 23 12:13:56 crc kubenswrapper[4865]: I0123 12:13:56.210855 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 12:13:56 crc kubenswrapper[4865]: I0123 12:13:56.211197 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 23 12:13:56 crc kubenswrapper[4865]: I0123 12:13:56.336224 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rq28k" event={"ID":"bb70ef0d-40c1-4ee9-b73e-98b471e378c2","Type":"ContainerStarted","Data":"2c14046e1a28a6f22725271885caca8c6aa98cab9311ed84d85b472cae7b5308"} Jan 23 12:13:56 crc kubenswrapper[4865]: I0123 12:13:56.343727 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ac84596-738e-45f8-99e6-60c38a533175","Type":"ContainerStarted","Data":"3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7"} Jan 23 12:13:56 crc kubenswrapper[4865]: I0123 12:13:56.343763 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 12:13:56 crc kubenswrapper[4865]: I0123 12:13:56.343865 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 23 12:13:57 crc kubenswrapper[4865]: I0123 12:13:57.363735 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ac84596-738e-45f8-99e6-60c38a533175","Type":"ContainerStarted","Data":"87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0"} Jan 23 12:13:58 crc kubenswrapper[4865]: I0123 12:13:58.371951 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 12:13:58 
crc kubenswrapper[4865]: I0123 12:13:58.372163 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 12:13:59 crc kubenswrapper[4865]: I0123 12:13:59.475860 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 12:13:59 crc kubenswrapper[4865]: I0123 12:13:59.476354 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 23 12:13:59 crc kubenswrapper[4865]: I0123 12:13:59.551834 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 12:13:59 crc kubenswrapper[4865]: I0123 12:13:59.579314 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 23 12:14:00 crc kubenswrapper[4865]: I0123 12:14:00.410484 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ac84596-738e-45f8-99e6-60c38a533175","Type":"ContainerStarted","Data":"c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2"} Jan 23 12:14:00 crc kubenswrapper[4865]: I0123 12:14:00.411225 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 12:14:00 crc kubenswrapper[4865]: I0123 12:14:00.411243 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 23 12:14:00 crc kubenswrapper[4865]: I0123 12:14:00.475361 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.050547503 podStartE2EDuration="7.475342292s" podCreationTimestamp="2026-01-23 12:13:53 +0000 UTC" firstStartedPulling="2026-01-23 12:13:54.692792391 +0000 UTC m=+1278.861864617" lastFinishedPulling="2026-01-23 12:13:59.11758718 +0000 UTC m=+1283.286659406" observedRunningTime="2026-01-23 12:14:00.468854016 +0000 UTC m=+1284.637926232" watchObservedRunningTime="2026-01-23 12:14:00.475342292 +0000 UTC m=+1284.644414518" Jan 23 12:14:01 crc kubenswrapper[4865]: I0123 12:14:01.420023 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 12:14:01 crc kubenswrapper[4865]: I0123 12:14:01.907141 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 12:14:01 crc kubenswrapper[4865]: I0123 12:14:01.907276 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 12:14:01 crc kubenswrapper[4865]: I0123 12:14:01.909697 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 23 12:14:02 crc kubenswrapper[4865]: I0123 12:14:02.433908 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 12:14:02 crc kubenswrapper[4865]: I0123 12:14:02.434233 4865 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 12:14:04 crc kubenswrapper[4865]: I0123 12:14:04.089895 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 23 12:14:04 crc kubenswrapper[4865]: I0123 12:14:04.090280 4865 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:14:04 crc kubenswrapper[4865]: I0123 12:14:04.091225 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"773f43c7fc3b7b930164eb5a9391098dcc2b2866277b673db9ee6522e2b623e3"} pod="openstack/horizon-7d44bd7746-lpzlt" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 12:14:04 crc kubenswrapper[4865]: I0123 12:14:04.091263 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" containerID="cri-o://773f43c7fc3b7b930164eb5a9391098dcc2b2866277b673db9ee6522e2b623e3" gracePeriod=30 Jan 23 12:14:04 crc kubenswrapper[4865]: I0123 12:14:04.357387 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 12:14:09 crc kubenswrapper[4865]: I0123 12:14:09.307413 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 12:14:09 crc kubenswrapper[4865]: I0123 12:14:09.307896 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 23 12:14:14 crc kubenswrapper[4865]: I0123 12:14:14.355204 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 12:14:15 crc kubenswrapper[4865]: E0123 12:14:15.237009 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-nova-conductor:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:14:15 crc kubenswrapper[4865]: E0123 12:14:15.237059 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-nova-conductor:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:14:15 crc kubenswrapper[4865]: E0123 12:14:15.237162 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-nova-conductor:c3923531bcda0b0811b2d5053f189beb,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-84mpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-rq28k_openstack(bb70ef0d-40c1-4ee9-b73e-98b471e378c2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:14:15 crc kubenswrapper[4865]: E0123 12:14:15.238308 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-rq28k" podUID="bb70ef0d-40c1-4ee9-b73e-98b471e378c2" Jan 23 12:14:15 crc kubenswrapper[4865]: E0123 12:14:15.550982 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-nova-conductor:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-rq28k" podUID="bb70ef0d-40c1-4ee9-b73e-98b471e378c2" Jan 23 12:14:20 crc kubenswrapper[4865]: I0123 12:14:20.286789 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:20 crc kubenswrapper[4865]: I0123 12:14:20.288467 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="ceilometer-central-agent" containerID="cri-o://10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e" gracePeriod=30 Jan 23 12:14:20 crc kubenswrapper[4865]: I0123 12:14:20.289181 4865 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="proxy-httpd" containerID="cri-o://c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2" gracePeriod=30 Jan 23 12:14:20 crc kubenswrapper[4865]: I0123 12:14:20.289229 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="sg-core" containerID="cri-o://87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0" gracePeriod=30 Jan 23 12:14:20 crc kubenswrapper[4865]: I0123 12:14:20.289258 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="ceilometer-notification-agent" containerID="cri-o://3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7" gracePeriod=30 Jan 23 12:14:20 crc kubenswrapper[4865]: I0123 12:14:20.306721 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.195:3000/\": EOF" Jan 23 12:14:20 crc kubenswrapper[4865]: I0123 12:14:20.593765 4865 generic.go:334] "Generic (PLEG): container finished" podID="6ac84596-738e-45f8-99e6-60c38a533175" containerID="c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2" exitCode=0 Jan 23 12:14:20 crc kubenswrapper[4865]: I0123 12:14:20.594009 4865 generic.go:334] "Generic (PLEG): container finished" podID="6ac84596-738e-45f8-99e6-60c38a533175" containerID="87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0" exitCode=2 Jan 23 12:14:20 crc kubenswrapper[4865]: I0123 12:14:20.593830 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ac84596-738e-45f8-99e6-60c38a533175","Type":"ContainerDied","Data":"c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2"} Jan 23 12:14:20 crc kubenswrapper[4865]: I0123 12:14:20.594061 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ac84596-738e-45f8-99e6-60c38a533175","Type":"ContainerDied","Data":"87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0"} Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.550478 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.605513 4865 generic.go:334] "Generic (PLEG): container finished" podID="6ac84596-738e-45f8-99e6-60c38a533175" containerID="3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7" exitCode=0 Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.605541 4865 generic.go:334] "Generic (PLEG): container finished" podID="6ac84596-738e-45f8-99e6-60c38a533175" containerID="10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e" exitCode=0 Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.605556 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ac84596-738e-45f8-99e6-60c38a533175","Type":"ContainerDied","Data":"3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7"} Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.605675 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ac84596-738e-45f8-99e6-60c38a533175","Type":"ContainerDied","Data":"10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e"} Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.605579 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.605707 4865 scope.go:117] "RemoveContainer" containerID="c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.605693 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ac84596-738e-45f8-99e6-60c38a533175","Type":"ContainerDied","Data":"1e58ce3202d41be345519fc39a4fd1d9bcfd8e5af81c89fbfc987c860d04f2d5"} Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.631973 4865 scope.go:117] "RemoveContainer" containerID="87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.655697 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-sg-core-conf-yaml\") pod \"6ac84596-738e-45f8-99e6-60c38a533175\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.655745 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-combined-ca-bundle\") pod \"6ac84596-738e-45f8-99e6-60c38a533175\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.655782 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-scripts\") pod \"6ac84596-738e-45f8-99e6-60c38a533175\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.655851 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ac84596-738e-45f8-99e6-60c38a533175-run-httpd\") pod \"6ac84596-738e-45f8-99e6-60c38a533175\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.655945 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm557\" 
(UniqueName: \"kubernetes.io/projected/6ac84596-738e-45f8-99e6-60c38a533175-kube-api-access-wm557\") pod \"6ac84596-738e-45f8-99e6-60c38a533175\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.656006 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-config-data\") pod \"6ac84596-738e-45f8-99e6-60c38a533175\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.656049 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ac84596-738e-45f8-99e6-60c38a533175-log-httpd\") pod \"6ac84596-738e-45f8-99e6-60c38a533175\" (UID: \"6ac84596-738e-45f8-99e6-60c38a533175\") " Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.657463 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac84596-738e-45f8-99e6-60c38a533175-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6ac84596-738e-45f8-99e6-60c38a533175" (UID: "6ac84596-738e-45f8-99e6-60c38a533175"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.661190 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac84596-738e-45f8-99e6-60c38a533175-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6ac84596-738e-45f8-99e6-60c38a533175" (UID: "6ac84596-738e-45f8-99e6-60c38a533175"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.663522 4865 scope.go:117] "RemoveContainer" containerID="3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.664855 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-scripts" (OuterVolumeSpecName: "scripts") pod "6ac84596-738e-45f8-99e6-60c38a533175" (UID: "6ac84596-738e-45f8-99e6-60c38a533175"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.665482 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac84596-738e-45f8-99e6-60c38a533175-kube-api-access-wm557" (OuterVolumeSpecName: "kube-api-access-wm557") pod "6ac84596-738e-45f8-99e6-60c38a533175" (UID: "6ac84596-738e-45f8-99e6-60c38a533175"). InnerVolumeSpecName "kube-api-access-wm557". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.703589 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6ac84596-738e-45f8-99e6-60c38a533175" (UID: "6ac84596-738e-45f8-99e6-60c38a533175"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.759058 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.759093 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.759104 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ac84596-738e-45f8-99e6-60c38a533175-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.759116 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm557\" (UniqueName: \"kubernetes.io/projected/6ac84596-738e-45f8-99e6-60c38a533175-kube-api-access-wm557\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.759128 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ac84596-738e-45f8-99e6-60c38a533175-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.787834 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-config-data" (OuterVolumeSpecName: "config-data") pod "6ac84596-738e-45f8-99e6-60c38a533175" (UID: "6ac84596-738e-45f8-99e6-60c38a533175"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.792495 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ac84596-738e-45f8-99e6-60c38a533175" (UID: "6ac84596-738e-45f8-99e6-60c38a533175"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.847118 4865 scope.go:117] "RemoveContainer" containerID="10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.860415 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.860447 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ac84596-738e-45f8-99e6-60c38a533175-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.865962 4865 scope.go:117] "RemoveContainer" containerID="c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2" Jan 23 12:14:21 crc kubenswrapper[4865]: E0123 12:14:21.866434 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2\": container with ID starting with c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2 not found: ID does not exist" containerID="c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.866470 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2"} err="failed to get container status \"c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2\": rpc error: code = NotFound desc = could not find container \"c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2\": container with ID starting with c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2 not found: ID does not exist" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.866497 4865 scope.go:117] "RemoveContainer" containerID="87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0" Jan 23 12:14:21 crc kubenswrapper[4865]: E0123 12:14:21.866919 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0\": container with ID starting with 87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0 not found: ID does not exist" containerID="87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.866953 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0"} err="failed to get container status \"87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0\": rpc error: code = NotFound desc = could not find container \"87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0\": container with ID starting with 87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0 not found: ID does not exist" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.866973 4865 scope.go:117] "RemoveContainer" containerID="3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7" Jan 23 12:14:21 crc kubenswrapper[4865]: E0123 12:14:21.867202 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7\": container with ID starting with 3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7 not found: ID does not exist" containerID="3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.867233 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7"} err="failed to get container status \"3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7\": rpc error: code = NotFound desc = could not find container \"3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7\": container with ID starting with 3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7 not found: ID does not exist" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.867248 4865 scope.go:117] "RemoveContainer" containerID="10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e" Jan 23 12:14:21 crc kubenswrapper[4865]: E0123 12:14:21.867499 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e\": container with ID starting with 10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e not found: ID does not exist" containerID="10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.867531 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e"} err="failed to get container status \"10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e\": rpc error: code = NotFound desc = could not find container \"10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e\": container with ID starting with 10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e not found: ID does not exist" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.867549 4865 scope.go:117] "RemoveContainer" containerID="c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.867849 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2"} err="failed to get container status \"c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2\": rpc error: code = NotFound desc = could not find container \"c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2\": container with ID starting with c1598d031a6cbbe77fd6f3a43573dcfa776797f33f3bc8ff716a0dd9faa983a2 not found: ID does not exist" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.867878 4865 scope.go:117] "RemoveContainer" containerID="87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.868179 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0"} err="failed to get container status \"87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0\": rpc error: code = NotFound desc = could not find container 
\"87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0\": container with ID starting with 87e78b76bf19beee215f29997029c25b754e8941fab35f20fe9f91a988a207d0 not found: ID does not exist" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.868206 4865 scope.go:117] "RemoveContainer" containerID="3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.868477 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7"} err="failed to get container status \"3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7\": rpc error: code = NotFound desc = could not find container \"3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7\": container with ID starting with 3039c3b2a05335eafb1583a0508d0df0bf3b87d28567ff02861d0a2876182be7 not found: ID does not exist" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.868504 4865 scope.go:117] "RemoveContainer" containerID="10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.868749 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e"} err="failed to get container status \"10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e\": rpc error: code = NotFound desc = could not find container \"10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e\": container with ID starting with 10e6719692a3ef583a164b1e5913d6ac089fe0fe9661f3e4849f8cb0c304fb5e not found: ID does not exist" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.939266 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.951778 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.978371 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:21 crc kubenswrapper[4865]: E0123 12:14:21.978780 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="sg-core" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.978800 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="sg-core" Jan 23 12:14:21 crc kubenswrapper[4865]: E0123 12:14:21.978819 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="proxy-httpd" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.978826 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="proxy-httpd" Jan 23 12:14:21 crc kubenswrapper[4865]: E0123 12:14:21.978836 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="ceilometer-central-agent" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.978844 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="ceilometer-central-agent" Jan 23 12:14:21 crc kubenswrapper[4865]: E0123 12:14:21.978867 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac84596-738e-45f8-99e6-60c38a533175" 
containerName="ceilometer-notification-agent" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.978873 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="ceilometer-notification-agent" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.979046 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="ceilometer-notification-agent" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.979063 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="sg-core" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.979072 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="proxy-httpd" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.979084 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac84596-738e-45f8-99e6-60c38a533175" containerName="ceilometer-central-agent" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.980907 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.982992 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 12:14:21 crc kubenswrapper[4865]: I0123 12:14:21.983790 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.000410 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.063900 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.063939 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.063972 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsjp9\" (UniqueName: \"kubernetes.io/projected/d1ceb32d-51bc-4677-936f-3a48a11624cc-kube-api-access-qsjp9\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.063986 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-scripts\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.064045 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ceb32d-51bc-4677-936f-3a48a11624cc-run-httpd\") pod \"ceilometer-0\" (UID: 
\"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.064100 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ceb32d-51bc-4677-936f-3a48a11624cc-log-httpd\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.064119 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-config-data\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.120452 4865 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8de820a5-b4df-48bb-aa66-756fe92e787d"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8de820a5-b4df-48bb-aa66-756fe92e787d] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8de820a5_b4df_48bb_aa66_756fe92e787d.slice" Jan 23 12:14:22 crc kubenswrapper[4865]: E0123 12:14:22.120518 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod8de820a5-b4df-48bb-aa66-756fe92e787d] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod8de820a5-b4df-48bb-aa66-756fe92e787d] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8de820a5_b4df_48bb_aa66_756fe92e787d.slice" pod="openstack/heat-engine-696547d8cb-9scxl" podUID="8de820a5-b4df-48bb-aa66-756fe92e787d" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.128994 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac84596-738e-45f8-99e6-60c38a533175" path="/var/lib/kubelet/pods/6ac84596-738e-45f8-99e6-60c38a533175/volumes" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.165958 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.165992 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.166728 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsjp9\" (UniqueName: \"kubernetes.io/projected/d1ceb32d-51bc-4677-936f-3a48a11624cc-kube-api-access-qsjp9\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.166767 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-scripts\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.166961 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ceb32d-51bc-4677-936f-3a48a11624cc-run-httpd\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.167115 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ceb32d-51bc-4677-936f-3a48a11624cc-log-httpd\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.167201 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-config-data\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.167941 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ceb32d-51bc-4677-936f-3a48a11624cc-log-httpd\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.167940 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ceb32d-51bc-4677-936f-3a48a11624cc-run-httpd\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.171193 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.173113 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.174547 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-scripts\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.176022 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-config-data\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.186169 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsjp9\" (UniqueName: \"kubernetes.io/projected/d1ceb32d-51bc-4677-936f-3a48a11624cc-kube-api-access-qsjp9\") pod \"ceilometer-0\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.302006 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.619758 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-696547d8cb-9scxl" Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.645873 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-696547d8cb-9scxl"] Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.653292 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-696547d8cb-9scxl"] Jan 23 12:14:22 crc kubenswrapper[4865]: I0123 12:14:22.759691 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:23 crc kubenswrapper[4865]: I0123 12:14:23.628419 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ceb32d-51bc-4677-936f-3a48a11624cc","Type":"ContainerStarted","Data":"0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e"} Jan 23 12:14:23 crc kubenswrapper[4865]: I0123 12:14:23.628729 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ceb32d-51bc-4677-936f-3a48a11624cc","Type":"ContainerStarted","Data":"4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a"} Jan 23 12:14:23 crc kubenswrapper[4865]: I0123 12:14:23.628740 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ceb32d-51bc-4677-936f-3a48a11624cc","Type":"ContainerStarted","Data":"7c30a58c30d84e45da24761cfea7c1e5db4a437ea7334513c225a10d28a44a36"} Jan 23 12:14:24 crc kubenswrapper[4865]: I0123 12:14:24.129354 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de820a5-b4df-48bb-aa66-756fe92e787d" path="/var/lib/kubelet/pods/8de820a5-b4df-48bb-aa66-756fe92e787d/volumes" Jan 23 12:14:24 crc kubenswrapper[4865]: I0123 12:14:24.640484 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ceb32d-51bc-4677-936f-3a48a11624cc","Type":"ContainerStarted","Data":"e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493"} Jan 23 12:14:27 crc kubenswrapper[4865]: I0123 12:14:27.147515 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:14:27 crc kubenswrapper[4865]: I0123 12:14:27.669820 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rq28k" event={"ID":"bb70ef0d-40c1-4ee9-b73e-98b471e378c2","Type":"ContainerStarted","Data":"e5decde60c0e82eae35d75f68343d15038384d920e1d819c22bed60ed4575d97"} Jan 23 12:14:27 crc kubenswrapper[4865]: I0123 12:14:27.671970 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ceb32d-51bc-4677-936f-3a48a11624cc","Type":"ContainerStarted","Data":"d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f"} Jan 23 12:14:27 crc kubenswrapper[4865]: I0123 12:14:27.672086 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 12:14:27 crc kubenswrapper[4865]: I0123 12:14:27.687624 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rq28k" podStartSLOduration=2.650519452 podStartE2EDuration="33.68759044s" podCreationTimestamp="2026-01-23 12:13:54 +0000 UTC" firstStartedPulling="2026-01-23 12:13:55.492126086 +0000 UTC m=+1279.661198312" lastFinishedPulling="2026-01-23 
12:14:26.529197074 +0000 UTC m=+1310.698269300" observedRunningTime="2026-01-23 12:14:27.682194729 +0000 UTC m=+1311.851266965" watchObservedRunningTime="2026-01-23 12:14:27.68759044 +0000 UTC m=+1311.856662666" Jan 23 12:14:27 crc kubenswrapper[4865]: I0123 12:14:27.702119 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.099970736 podStartE2EDuration="6.702101319s" podCreationTimestamp="2026-01-23 12:14:21 +0000 UTC" firstStartedPulling="2026-01-23 12:14:22.77886544 +0000 UTC m=+1306.947937666" lastFinishedPulling="2026-01-23 12:14:26.380995993 +0000 UTC m=+1310.550068249" observedRunningTime="2026-01-23 12:14:27.700557812 +0000 UTC m=+1311.869630038" watchObservedRunningTime="2026-01-23 12:14:27.702101319 +0000 UTC m=+1311.871173545" Jan 23 12:14:29 crc kubenswrapper[4865]: I0123 12:14:29.017423 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:14:29 crc kubenswrapper[4865]: I0123 12:14:29.259467 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d44bd7746-lpzlt"] Jan 23 12:14:34 crc kubenswrapper[4865]: I0123 12:14:34.775917 4865 generic.go:334] "Generic (PLEG): container finished" podID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerID="773f43c7fc3b7b930164eb5a9391098dcc2b2866277b673db9ee6522e2b623e3" exitCode=137 Jan 23 12:14:34 crc kubenswrapper[4865]: I0123 12:14:34.776845 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d44bd7746-lpzlt" event={"ID":"581ecfce-2612-48aa-beeb-a41024ef2b6b","Type":"ContainerDied","Data":"773f43c7fc3b7b930164eb5a9391098dcc2b2866277b673db9ee6522e2b623e3"} Jan 23 12:14:34 crc kubenswrapper[4865]: I0123 12:14:34.780070 4865 scope.go:117] "RemoveContainer" containerID="fecf4f76049da39ba53062cbfb6e4bcdfc58676fe141eea30a24d30464ca2daf" Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.278014 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.278851 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="ceilometer-central-agent" containerID="cri-o://4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a" gracePeriod=30 Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.278911 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="ceilometer-notification-agent" containerID="cri-o://0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e" gracePeriod=30 Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.278917 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="proxy-httpd" containerID="cri-o://d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f" gracePeriod=30 Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.279210 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="sg-core" containerID="cri-o://e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493" gracePeriod=30 Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.296979 4865 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/ceilometer-0" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.197:3000/\": EOF" Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.837240 4865 generic.go:334] "Generic (PLEG): container finished" podID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerID="d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f" exitCode=0 Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.837584 4865 generic.go:334] "Generic (PLEG): container finished" podID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerID="e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493" exitCode=2 Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.837713 4865 generic.go:334] "Generic (PLEG): container finished" podID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerID="4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a" exitCode=0 Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.837334 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ceb32d-51bc-4677-936f-3a48a11624cc","Type":"ContainerDied","Data":"d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f"} Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.837846 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ceb32d-51bc-4677-936f-3a48a11624cc","Type":"ContainerDied","Data":"e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493"} Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.837877 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ceb32d-51bc-4677-936f-3a48a11624cc","Type":"ContainerDied","Data":"4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a"} Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.840519 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d44bd7746-lpzlt" event={"ID":"581ecfce-2612-48aa-beeb-a41024ef2b6b","Type":"ContainerStarted","Data":"196d053e89c6c1f701d070483f296d52d75d27bc87e6b3dab1359d2a12168ca1"} Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.840919 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon-log" containerID="cri-o://a9f5b45dcc5b04b3bf3ecb6680aae49876c6d666882bf3eeb621de8ccd4a8a85" gracePeriod=30 Jan 23 12:14:38 crc kubenswrapper[4865]: I0123 12:14:38.840995 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d44bd7746-lpzlt" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" containerID="cri-o://196d053e89c6c1f701d070483f296d52d75d27bc87e6b3dab1359d2a12168ca1" gracePeriod=30 Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.714964 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.833246 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-combined-ca-bundle\") pod \"d1ceb32d-51bc-4677-936f-3a48a11624cc\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.833371 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ceb32d-51bc-4677-936f-3a48a11624cc-run-httpd\") pod \"d1ceb32d-51bc-4677-936f-3a48a11624cc\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.834306 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ceb32d-51bc-4677-936f-3a48a11624cc-log-httpd\") pod \"d1ceb32d-51bc-4677-936f-3a48a11624cc\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.834411 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsjp9\" (UniqueName: \"kubernetes.io/projected/d1ceb32d-51bc-4677-936f-3a48a11624cc-kube-api-access-qsjp9\") pod \"d1ceb32d-51bc-4677-936f-3a48a11624cc\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.834469 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-sg-core-conf-yaml\") pod \"d1ceb32d-51bc-4677-936f-3a48a11624cc\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.834520 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-config-data\") pod \"d1ceb32d-51bc-4677-936f-3a48a11624cc\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.834571 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-scripts\") pod \"d1ceb32d-51bc-4677-936f-3a48a11624cc\" (UID: \"d1ceb32d-51bc-4677-936f-3a48a11624cc\") " Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.834768 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1ceb32d-51bc-4677-936f-3a48a11624cc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d1ceb32d-51bc-4677-936f-3a48a11624cc" (UID: "d1ceb32d-51bc-4677-936f-3a48a11624cc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.835093 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1ceb32d-51bc-4677-936f-3a48a11624cc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d1ceb32d-51bc-4677-936f-3a48a11624cc" (UID: "d1ceb32d-51bc-4677-936f-3a48a11624cc"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.835373 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ceb32d-51bc-4677-936f-3a48a11624cc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.835396 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d1ceb32d-51bc-4677-936f-3a48a11624cc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.849266 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-scripts" (OuterVolumeSpecName: "scripts") pod "d1ceb32d-51bc-4677-936f-3a48a11624cc" (UID: "d1ceb32d-51bc-4677-936f-3a48a11624cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.860844 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1ceb32d-51bc-4677-936f-3a48a11624cc-kube-api-access-qsjp9" (OuterVolumeSpecName: "kube-api-access-qsjp9") pod "d1ceb32d-51bc-4677-936f-3a48a11624cc" (UID: "d1ceb32d-51bc-4677-936f-3a48a11624cc"). InnerVolumeSpecName "kube-api-access-qsjp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.882348 4865 generic.go:334] "Generic (PLEG): container finished" podID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerID="0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e" exitCode=0 Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.882783 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.883112 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ceb32d-51bc-4677-936f-3a48a11624cc","Type":"ContainerDied","Data":"0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e"} Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.883256 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d1ceb32d-51bc-4677-936f-3a48a11624cc","Type":"ContainerDied","Data":"7c30a58c30d84e45da24761cfea7c1e5db4a437ea7334513c225a10d28a44a36"} Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.883356 4865 scope.go:117] "RemoveContainer" containerID="d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f" Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.922781 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d1ceb32d-51bc-4677-936f-3a48a11624cc" (UID: "d1ceb32d-51bc-4677-936f-3a48a11624cc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.942653 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.942682 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsjp9\" (UniqueName: \"kubernetes.io/projected/d1ceb32d-51bc-4677-936f-3a48a11624cc-kube-api-access-qsjp9\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:40 crc kubenswrapper[4865]: I0123 12:14:40.942693 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.002744 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1ceb32d-51bc-4677-936f-3a48a11624cc" (UID: "d1ceb32d-51bc-4677-936f-3a48a11624cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.020153 4865 scope.go:117] "RemoveContainer" containerID="e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.024744 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-config-data" (OuterVolumeSpecName: "config-data") pod "d1ceb32d-51bc-4677-936f-3a48a11624cc" (UID: "d1ceb32d-51bc-4677-936f-3a48a11624cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.046173 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.046212 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ceb32d-51bc-4677-936f-3a48a11624cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.054473 4865 scope.go:117] "RemoveContainer" containerID="0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.077823 4865 scope.go:117] "RemoveContainer" containerID="4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.099428 4865 scope.go:117] "RemoveContainer" containerID="d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f" Jan 23 12:14:41 crc kubenswrapper[4865]: E0123 12:14:41.100032 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f\": container with ID starting with d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f not found: ID does not exist" containerID="d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.100080 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f"} err="failed to get container status \"d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f\": rpc error: code = NotFound desc = could not find container \"d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f\": container with ID starting with d103adaa46225487b3bead6844b637a25102571dace2079ddd2c9503c25a260f not found: ID does not exist" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.100125 4865 scope.go:117] "RemoveContainer" containerID="e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493" Jan 23 12:14:41 crc kubenswrapper[4865]: E0123 12:14:41.100382 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493\": container with ID starting with e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493 not found: ID does not exist" containerID="e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.100405 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493"} err="failed to get container status \"e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493\": rpc error: code = NotFound desc = could not find container \"e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493\": container with ID starting with e33645269308695e5173debe5a2f3f17740561d95e773173501361db73036493 not found: ID does not exist" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.100418 4865 scope.go:117] "RemoveContainer" 
containerID="0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e" Jan 23 12:14:41 crc kubenswrapper[4865]: E0123 12:14:41.100618 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e\": container with ID starting with 0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e not found: ID does not exist" containerID="0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.100638 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e"} err="failed to get container status \"0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e\": rpc error: code = NotFound desc = could not find container \"0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e\": container with ID starting with 0bf1bdb14fbdb286ffc7e80c1bed896ad6a85e9e118ec0067fbe6e2f0016690e not found: ID does not exist" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.100653 4865 scope.go:117] "RemoveContainer" containerID="4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a" Jan 23 12:14:41 crc kubenswrapper[4865]: E0123 12:14:41.100828 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a\": container with ID starting with 4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a not found: ID does not exist" containerID="4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.100845 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a"} err="failed to get container status \"4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a\": rpc error: code = NotFound desc = could not find container \"4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a\": container with ID starting with 4d96ceaebc03772cda679ae9c8a8ba7427d97cb20d96350735f7dcadaf26ac6a not found: ID does not exist" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.266132 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.299661 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.327649 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:41 crc kubenswrapper[4865]: E0123 12:14:41.328046 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="ceilometer-notification-agent" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.328065 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="ceilometer-notification-agent" Jan 23 12:14:41 crc kubenswrapper[4865]: E0123 12:14:41.328082 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="sg-core" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.328088 4865 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="sg-core" Jan 23 12:14:41 crc kubenswrapper[4865]: E0123 12:14:41.328103 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="ceilometer-central-agent" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.328108 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="ceilometer-central-agent" Jan 23 12:14:41 crc kubenswrapper[4865]: E0123 12:14:41.328137 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="proxy-httpd" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.328143 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="proxy-httpd" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.328309 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="sg-core" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.328325 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="ceilometer-central-agent" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.328338 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="ceilometer-notification-agent" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.328348 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" containerName="proxy-httpd" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.330312 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.336283 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.338379 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.348508 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.351412 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv6mx\" (UniqueName: \"kubernetes.io/projected/168185de-de1c-45e3-9a69-9f2145bc2371-kube-api-access-dv6mx\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.351441 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.351465 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/168185de-de1c-45e3-9a69-9f2145bc2371-run-httpd\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.351491 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-scripts\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.351517 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/168185de-de1c-45e3-9a69-9f2145bc2371-log-httpd\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.351545 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.351574 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-config-data\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.453087 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv6mx\" (UniqueName: \"kubernetes.io/projected/168185de-de1c-45e3-9a69-9f2145bc2371-kube-api-access-dv6mx\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: 
I0123 12:14:41.453138 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.453169 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/168185de-de1c-45e3-9a69-9f2145bc2371-run-httpd\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.453199 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-scripts\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.453229 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/168185de-de1c-45e3-9a69-9f2145bc2371-log-httpd\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.453266 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.453298 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-config-data\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.454440 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/168185de-de1c-45e3-9a69-9f2145bc2371-log-httpd\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.454693 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/168185de-de1c-45e3-9a69-9f2145bc2371-run-httpd\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.457750 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-scripts\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.458006 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-config-data\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.459527 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.468366 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.475054 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv6mx\" (UniqueName: \"kubernetes.io/projected/168185de-de1c-45e3-9a69-9f2145bc2371-kube-api-access-dv6mx\") pod \"ceilometer-0\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " pod="openstack/ceilometer-0" Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.586987 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:41 crc kubenswrapper[4865]: I0123 12:14:41.587857 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:14:42 crc kubenswrapper[4865]: I0123 12:14:42.098343 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:14:42 crc kubenswrapper[4865]: I0123 12:14:42.133837 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1ceb32d-51bc-4677-936f-3a48a11624cc" path="/var/lib/kubelet/pods/d1ceb32d-51bc-4677-936f-3a48a11624cc/volumes" Jan 23 12:14:42 crc kubenswrapper[4865]: I0123 12:14:42.908716 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"168185de-de1c-45e3-9a69-9f2145bc2371","Type":"ContainerStarted","Data":"923b6e9e565fdb988a2f8d990e1e86c2e19508cf72c2a3e42cae87c10c23d95d"} Jan 23 12:14:43 crc kubenswrapper[4865]: I0123 12:14:43.919250 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"168185de-de1c-45e3-9a69-9f2145bc2371","Type":"ContainerStarted","Data":"7bd8a803a25e2647c57845ebc2b7714d2c3e5d41514dc833807fd13b1f1738eb"} Jan 23 12:14:43 crc kubenswrapper[4865]: I0123 12:14:43.919830 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"168185de-de1c-45e3-9a69-9f2145bc2371","Type":"ContainerStarted","Data":"e721e28826127b009e280713d649c0204f4eff3a98ae684545cf60e0aa47a9aa"} Jan 23 12:14:44 crc kubenswrapper[4865]: I0123 12:14:44.089670 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:14:47 crc kubenswrapper[4865]: I0123 12:14:47.956345 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"168185de-de1c-45e3-9a69-9f2145bc2371","Type":"ContainerStarted","Data":"2fbf48865b8f64c9eb9cc0bcf5d39659dadb1539b29971cbb0e954093b66ac10"} Jan 23 12:14:51 crc kubenswrapper[4865]: I0123 12:14:51.993445 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"168185de-de1c-45e3-9a69-9f2145bc2371","Type":"ContainerStarted","Data":"3d4a7bc25f2ef511d838766fef7620ba7f9a741899bc2fab40b6e69dcc0773ba"} Jan 23 12:14:51 crc kubenswrapper[4865]: I0123 12:14:51.994239 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" 
containerName="ceilometer-central-agent" containerID="cri-o://e721e28826127b009e280713d649c0204f4eff3a98ae684545cf60e0aa47a9aa" gracePeriod=30 Jan 23 12:14:51 crc kubenswrapper[4865]: I0123 12:14:51.994521 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 12:14:51 crc kubenswrapper[4865]: I0123 12:14:51.994739 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="proxy-httpd" containerID="cri-o://3d4a7bc25f2ef511d838766fef7620ba7f9a741899bc2fab40b6e69dcc0773ba" gracePeriod=30 Jan 23 12:14:51 crc kubenswrapper[4865]: I0123 12:14:51.994785 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="ceilometer-notification-agent" containerID="cri-o://7bd8a803a25e2647c57845ebc2b7714d2c3e5d41514dc833807fd13b1f1738eb" gracePeriod=30 Jan 23 12:14:51 crc kubenswrapper[4865]: I0123 12:14:51.994887 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="sg-core" containerID="cri-o://2fbf48865b8f64c9eb9cc0bcf5d39659dadb1539b29971cbb0e954093b66ac10" gracePeriod=30 Jan 23 12:14:52 crc kubenswrapper[4865]: I0123 12:14:52.020466 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.997774218 podStartE2EDuration="11.020447696s" podCreationTimestamp="2026-01-23 12:14:41 +0000 UTC" firstStartedPulling="2026-01-23 12:14:42.07566728 +0000 UTC m=+1326.244739506" lastFinishedPulling="2026-01-23 12:14:50.098340758 +0000 UTC m=+1334.267412984" observedRunningTime="2026-01-23 12:14:52.016571343 +0000 UTC m=+1336.185643569" watchObservedRunningTime="2026-01-23 12:14:52.020447696 +0000 UTC m=+1336.189519912" Jan 23 12:14:53 crc kubenswrapper[4865]: I0123 12:14:53.003162 4865 generic.go:334] "Generic (PLEG): container finished" podID="168185de-de1c-45e3-9a69-9f2145bc2371" containerID="2fbf48865b8f64c9eb9cc0bcf5d39659dadb1539b29971cbb0e954093b66ac10" exitCode=2 Jan 23 12:14:53 crc kubenswrapper[4865]: I0123 12:14:53.003446 4865 generic.go:334] "Generic (PLEG): container finished" podID="168185de-de1c-45e3-9a69-9f2145bc2371" containerID="7bd8a803a25e2647c57845ebc2b7714d2c3e5d41514dc833807fd13b1f1738eb" exitCode=0 Jan 23 12:14:53 crc kubenswrapper[4865]: I0123 12:14:53.003467 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"168185de-de1c-45e3-9a69-9f2145bc2371","Type":"ContainerDied","Data":"2fbf48865b8f64c9eb9cc0bcf5d39659dadb1539b29971cbb0e954093b66ac10"} Jan 23 12:14:53 crc kubenswrapper[4865]: I0123 12:14:53.003493 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"168185de-de1c-45e3-9a69-9f2145bc2371","Type":"ContainerDied","Data":"7bd8a803a25e2647c57845ebc2b7714d2c3e5d41514dc833807fd13b1f1738eb"} Jan 23 12:14:55 crc kubenswrapper[4865]: I0123 12:14:55.041254 4865 generic.go:334] "Generic (PLEG): container finished" podID="168185de-de1c-45e3-9a69-9f2145bc2371" containerID="e721e28826127b009e280713d649c0204f4eff3a98ae684545cf60e0aa47a9aa" exitCode=0 Jan 23 12:14:55 crc kubenswrapper[4865]: I0123 12:14:55.041346 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"168185de-de1c-45e3-9a69-9f2145bc2371","Type":"ContainerDied","Data":"e721e28826127b009e280713d649c0204f4eff3a98ae684545cf60e0aa47a9aa"} Jan 23 12:14:58 crc kubenswrapper[4865]: I0123 12:14:58.065730 4865 generic.go:334] "Generic (PLEG): container finished" podID="bb70ef0d-40c1-4ee9-b73e-98b471e378c2" containerID="e5decde60c0e82eae35d75f68343d15038384d920e1d819c22bed60ed4575d97" exitCode=0 Jan 23 12:14:58 crc kubenswrapper[4865]: I0123 12:14:58.065953 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rq28k" event={"ID":"bb70ef0d-40c1-4ee9-b73e-98b471e378c2","Type":"ContainerDied","Data":"e5decde60c0e82eae35d75f68343d15038384d920e1d819c22bed60ed4575d97"} Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.400210 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.501976 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-combined-ca-bundle\") pod \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.502055 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-config-data\") pod \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.502151 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84mpg\" (UniqueName: \"kubernetes.io/projected/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-kube-api-access-84mpg\") pod \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.502230 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-scripts\") pod \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\" (UID: \"bb70ef0d-40c1-4ee9-b73e-98b471e378c2\") " Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.509425 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-scripts" (OuterVolumeSpecName: "scripts") pod "bb70ef0d-40c1-4ee9-b73e-98b471e378c2" (UID: "bb70ef0d-40c1-4ee9-b73e-98b471e378c2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.513978 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-kube-api-access-84mpg" (OuterVolumeSpecName: "kube-api-access-84mpg") pod "bb70ef0d-40c1-4ee9-b73e-98b471e378c2" (UID: "bb70ef0d-40c1-4ee9-b73e-98b471e378c2"). InnerVolumeSpecName "kube-api-access-84mpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.532353 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb70ef0d-40c1-4ee9-b73e-98b471e378c2" (UID: "bb70ef0d-40c1-4ee9-b73e-98b471e378c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.533371 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-config-data" (OuterVolumeSpecName: "config-data") pod "bb70ef0d-40c1-4ee9-b73e-98b471e378c2" (UID: "bb70ef0d-40c1-4ee9-b73e-98b471e378c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.605140 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84mpg\" (UniqueName: \"kubernetes.io/projected/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-kube-api-access-84mpg\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.605167 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.605177 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:14:59 crc kubenswrapper[4865]: I0123 12:14:59.605187 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb70ef0d-40c1-4ee9-b73e-98b471e378c2-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.085257 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rq28k" event={"ID":"bb70ef0d-40c1-4ee9-b73e-98b471e378c2","Type":"ContainerDied","Data":"2c14046e1a28a6f22725271885caca8c6aa98cab9311ed84d85b472cae7b5308"} Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.085294 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c14046e1a28a6f22725271885caca8c6aa98cab9311ed84d85b472cae7b5308" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.085343 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rq28k" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.221624 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7"] Jan 23 12:15:00 crc kubenswrapper[4865]: E0123 12:15:00.221945 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb70ef0d-40c1-4ee9-b73e-98b471e378c2" containerName="nova-cell0-conductor-db-sync" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.221963 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb70ef0d-40c1-4ee9-b73e-98b471e378c2" containerName="nova-cell0-conductor-db-sync" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.222145 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb70ef0d-40c1-4ee9-b73e-98b471e378c2" containerName="nova-cell0-conductor-db-sync" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.222903 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.226312 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.226318 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.249148 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7"] Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.294729 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.298447 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.308965 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.312817 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5fbhx" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.328379 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.343313 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl7bb\" (UniqueName: \"kubernetes.io/projected/0efc0078-ae1e-44d9-b57f-361da731424b-kube-api-access-rl7bb\") pod \"collect-profiles-29486175-j8lb7\" (UID: \"0efc0078-ae1e-44d9-b57f-361da731424b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.343445 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0efc0078-ae1e-44d9-b57f-361da731424b-secret-volume\") pod \"collect-profiles-29486175-j8lb7\" (UID: \"0efc0078-ae1e-44d9-b57f-361da731424b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.343678 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0efc0078-ae1e-44d9-b57f-361da731424b-config-volume\") pod \"collect-profiles-29486175-j8lb7\" (UID: \"0efc0078-ae1e-44d9-b57f-361da731424b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:00 crc kubenswrapper[4865]: E0123 12:15:00.398207 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb70ef0d_40c1_4ee9_b73e_98b471e378c2.slice/crio-2c14046e1a28a6f22725271885caca8c6aa98cab9311ed84d85b472cae7b5308\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb70ef0d_40c1_4ee9_b73e_98b471e378c2.slice\": RecentStats: unable to find data in memory cache]" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.445780 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbqdc\" (UniqueName: \"kubernetes.io/projected/d83adf6b-3978-4fc7-ba01-eb6f54c87903-kube-api-access-xbqdc\") pod \"nova-cell0-conductor-0\" (UID: \"d83adf6b-3978-4fc7-ba01-eb6f54c87903\") " pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.446134 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0efc0078-ae1e-44d9-b57f-361da731424b-config-volume\") pod \"collect-profiles-29486175-j8lb7\" (UID: \"0efc0078-ae1e-44d9-b57f-361da731424b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.446213 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d83adf6b-3978-4fc7-ba01-eb6f54c87903-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d83adf6b-3978-4fc7-ba01-eb6f54c87903\") " pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.446558 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl7bb\" (UniqueName: \"kubernetes.io/projected/0efc0078-ae1e-44d9-b57f-361da731424b-kube-api-access-rl7bb\") pod \"collect-profiles-29486175-j8lb7\" (UID: \"0efc0078-ae1e-44d9-b57f-361da731424b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.446682 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0efc0078-ae1e-44d9-b57f-361da731424b-secret-volume\") pod \"collect-profiles-29486175-j8lb7\" (UID: \"0efc0078-ae1e-44d9-b57f-361da731424b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.446779 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83adf6b-3978-4fc7-ba01-eb6f54c87903-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d83adf6b-3978-4fc7-ba01-eb6f54c87903\") " pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.447133 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0efc0078-ae1e-44d9-b57f-361da731424b-config-volume\") pod \"collect-profiles-29486175-j8lb7\" (UID: \"0efc0078-ae1e-44d9-b57f-361da731424b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.458849 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0efc0078-ae1e-44d9-b57f-361da731424b-secret-volume\") pod \"collect-profiles-29486175-j8lb7\" (UID: \"0efc0078-ae1e-44d9-b57f-361da731424b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.466660 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl7bb\" (UniqueName: \"kubernetes.io/projected/0efc0078-ae1e-44d9-b57f-361da731424b-kube-api-access-rl7bb\") pod \"collect-profiles-29486175-j8lb7\" (UID: \"0efc0078-ae1e-44d9-b57f-361da731424b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.548867 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbqdc\" (UniqueName: \"kubernetes.io/projected/d83adf6b-3978-4fc7-ba01-eb6f54c87903-kube-api-access-xbqdc\") pod \"nova-cell0-conductor-0\" (UID: \"d83adf6b-3978-4fc7-ba01-eb6f54c87903\") " pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.549226 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83adf6b-3978-4fc7-ba01-eb6f54c87903-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d83adf6b-3978-4fc7-ba01-eb6f54c87903\") " pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.549583 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83adf6b-3978-4fc7-ba01-eb6f54c87903-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d83adf6b-3978-4fc7-ba01-eb6f54c87903\") " pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.553292 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83adf6b-3978-4fc7-ba01-eb6f54c87903-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d83adf6b-3978-4fc7-ba01-eb6f54c87903\") " pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.554302 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83adf6b-3978-4fc7-ba01-eb6f54c87903-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d83adf6b-3978-4fc7-ba01-eb6f54c87903\") " pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.568174 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbqdc\" (UniqueName: \"kubernetes.io/projected/d83adf6b-3978-4fc7-ba01-eb6f54c87903-kube-api-access-xbqdc\") pod \"nova-cell0-conductor-0\" (UID: \"d83adf6b-3978-4fc7-ba01-eb6f54c87903\") " pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.580264 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:00 crc kubenswrapper[4865]: I0123 12:15:00.645400 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:01 crc kubenswrapper[4865]: I0123 12:15:01.096497 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7"] Jan 23 12:15:01 crc kubenswrapper[4865]: W0123 12:15:01.104012 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0efc0078_ae1e_44d9_b57f_361da731424b.slice/crio-1c4f1c56af4cb7e6ced2b0959f28a6502ed789f7bfa3ab1d7d22c085bec4a626 WatchSource:0}: Error finding container 1c4f1c56af4cb7e6ced2b0959f28a6502ed789f7bfa3ab1d7d22c085bec4a626: Status 404 returned error can't find the container with id 1c4f1c56af4cb7e6ced2b0959f28a6502ed789f7bfa3ab1d7d22c085bec4a626 Jan 23 12:15:01 crc kubenswrapper[4865]: I0123 12:15:01.261102 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 23 12:15:01 crc kubenswrapper[4865]: W0123 12:15:01.264274 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd83adf6b_3978_4fc7_ba01_eb6f54c87903.slice/crio-f720be5f437b40ca22fa9c1e1198d2c006a10df8be845fc7d4a193363f50bde2 WatchSource:0}: Error finding container f720be5f437b40ca22fa9c1e1198d2c006a10df8be845fc7d4a193363f50bde2: Status 404 returned error can't find the container with id f720be5f437b40ca22fa9c1e1198d2c006a10df8be845fc7d4a193363f50bde2 Jan 23 12:15:02 crc kubenswrapper[4865]: I0123 12:15:02.109872 4865 generic.go:334] "Generic (PLEG): container finished" podID="0efc0078-ae1e-44d9-b57f-361da731424b" containerID="ded0fc76f555bd38b9f49579107bacabaacd9eb9988c3eecea20fcdd7a7ae28b" exitCode=0 Jan 23 12:15:02 crc kubenswrapper[4865]: I0123 12:15:02.110064 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" event={"ID":"0efc0078-ae1e-44d9-b57f-361da731424b","Type":"ContainerDied","Data":"ded0fc76f555bd38b9f49579107bacabaacd9eb9988c3eecea20fcdd7a7ae28b"} Jan 23 12:15:02 crc kubenswrapper[4865]: I0123 12:15:02.110229 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" event={"ID":"0efc0078-ae1e-44d9-b57f-361da731424b","Type":"ContainerStarted","Data":"1c4f1c56af4cb7e6ced2b0959f28a6502ed789f7bfa3ab1d7d22c085bec4a626"} Jan 23 12:15:02 crc kubenswrapper[4865]: I0123 12:15:02.112108 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d83adf6b-3978-4fc7-ba01-eb6f54c87903","Type":"ContainerStarted","Data":"fd9f95f8b1423d4777d292fb8c812a8287c7737a1eb3d6149b48d4515372a295"} Jan 23 12:15:02 crc kubenswrapper[4865]: I0123 12:15:02.112153 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d83adf6b-3978-4fc7-ba01-eb6f54c87903","Type":"ContainerStarted","Data":"f720be5f437b40ca22fa9c1e1198d2c006a10df8be845fc7d4a193363f50bde2"} Jan 23 12:15:02 crc kubenswrapper[4865]: I0123 12:15:02.112231 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:02 crc kubenswrapper[4865]: I0123 12:15:02.166291 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.166273328 podStartE2EDuration="2.166273328s" podCreationTimestamp="2026-01-23 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:15:02.160647692 +0000 UTC m=+1346.329719918" watchObservedRunningTime="2026-01-23 12:15:02.166273328 +0000 UTC m=+1346.335345554" Jan 23 12:15:03 crc kubenswrapper[4865]: I0123 12:15:03.448510 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:03 crc kubenswrapper[4865]: I0123 12:15:03.501734 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0efc0078-ae1e-44d9-b57f-361da731424b-config-volume\") pod \"0efc0078-ae1e-44d9-b57f-361da731424b\" (UID: \"0efc0078-ae1e-44d9-b57f-361da731424b\") " Jan 23 12:15:03 crc kubenswrapper[4865]: I0123 12:15:03.501975 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl7bb\" (UniqueName: \"kubernetes.io/projected/0efc0078-ae1e-44d9-b57f-361da731424b-kube-api-access-rl7bb\") pod \"0efc0078-ae1e-44d9-b57f-361da731424b\" (UID: \"0efc0078-ae1e-44d9-b57f-361da731424b\") " Jan 23 12:15:03 crc kubenswrapper[4865]: I0123 12:15:03.502121 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0efc0078-ae1e-44d9-b57f-361da731424b-secret-volume\") pod \"0efc0078-ae1e-44d9-b57f-361da731424b\" (UID: \"0efc0078-ae1e-44d9-b57f-361da731424b\") " Jan 23 12:15:03 crc kubenswrapper[4865]: I0123 12:15:03.504982 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0efc0078-ae1e-44d9-b57f-361da731424b-config-volume" (OuterVolumeSpecName: "config-volume") pod "0efc0078-ae1e-44d9-b57f-361da731424b" (UID: "0efc0078-ae1e-44d9-b57f-361da731424b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:15:03 crc kubenswrapper[4865]: I0123 12:15:03.508617 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0efc0078-ae1e-44d9-b57f-361da731424b-kube-api-access-rl7bb" (OuterVolumeSpecName: "kube-api-access-rl7bb") pod "0efc0078-ae1e-44d9-b57f-361da731424b" (UID: "0efc0078-ae1e-44d9-b57f-361da731424b"). InnerVolumeSpecName "kube-api-access-rl7bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:15:03 crc kubenswrapper[4865]: I0123 12:15:03.525066 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0efc0078-ae1e-44d9-b57f-361da731424b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0efc0078-ae1e-44d9-b57f-361da731424b" (UID: "0efc0078-ae1e-44d9-b57f-361da731424b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:03 crc kubenswrapper[4865]: I0123 12:15:03.604416 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl7bb\" (UniqueName: \"kubernetes.io/projected/0efc0078-ae1e-44d9-b57f-361da731424b-kube-api-access-rl7bb\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:03 crc kubenswrapper[4865]: I0123 12:15:03.604453 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0efc0078-ae1e-44d9-b57f-361da731424b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:03 crc kubenswrapper[4865]: I0123 12:15:03.604462 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0efc0078-ae1e-44d9-b57f-361da731424b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:04 crc kubenswrapper[4865]: I0123 12:15:04.130682 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" Jan 23 12:15:04 crc kubenswrapper[4865]: I0123 12:15:04.130590 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7" event={"ID":"0efc0078-ae1e-44d9-b57f-361da731424b","Type":"ContainerDied","Data":"1c4f1c56af4cb7e6ced2b0959f28a6502ed789f7bfa3ab1d7d22c085bec4a626"} Jan 23 12:15:04 crc kubenswrapper[4865]: I0123 12:15:04.138821 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c4f1c56af4cb7e6ced2b0959f28a6502ed789f7bfa3ab1d7d22c085bec4a626" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.170196 4865 generic.go:334] "Generic (PLEG): container finished" podID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerID="196d053e89c6c1f701d070483f296d52d75d27bc87e6b3dab1359d2a12168ca1" exitCode=137 Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.170712 4865 generic.go:334] "Generic (PLEG): container finished" podID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerID="a9f5b45dcc5b04b3bf3ecb6680aae49876c6d666882bf3eeb621de8ccd4a8a85" exitCode=137 Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.170369 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d44bd7746-lpzlt" event={"ID":"581ecfce-2612-48aa-beeb-a41024ef2b6b","Type":"ContainerDied","Data":"196d053e89c6c1f701d070483f296d52d75d27bc87e6b3dab1359d2a12168ca1"} Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.170757 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d44bd7746-lpzlt" event={"ID":"581ecfce-2612-48aa-beeb-a41024ef2b6b","Type":"ContainerDied","Data":"a9f5b45dcc5b04b3bf3ecb6680aae49876c6d666882bf3eeb621de8ccd4a8a85"} Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.170781 4865 scope.go:117] "RemoveContainer" containerID="773f43c7fc3b7b930164eb5a9391098dcc2b2866277b673db9ee6522e2b623e3" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.445015 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.603256 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-horizon-tls-certs\") pod \"581ecfce-2612-48aa-beeb-a41024ef2b6b\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.603371 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-combined-ca-bundle\") pod \"581ecfce-2612-48aa-beeb-a41024ef2b6b\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.603410 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-horizon-secret-key\") pod \"581ecfce-2612-48aa-beeb-a41024ef2b6b\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.603440 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/581ecfce-2612-48aa-beeb-a41024ef2b6b-config-data\") pod \"581ecfce-2612-48aa-beeb-a41024ef2b6b\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.603458 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9brh\" (UniqueName: \"kubernetes.io/projected/581ecfce-2612-48aa-beeb-a41024ef2b6b-kube-api-access-d9brh\") pod \"581ecfce-2612-48aa-beeb-a41024ef2b6b\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.603509 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/581ecfce-2612-48aa-beeb-a41024ef2b6b-scripts\") pod \"581ecfce-2612-48aa-beeb-a41024ef2b6b\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.603546 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/581ecfce-2612-48aa-beeb-a41024ef2b6b-logs\") pod \"581ecfce-2612-48aa-beeb-a41024ef2b6b\" (UID: \"581ecfce-2612-48aa-beeb-a41024ef2b6b\") " Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.604272 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/581ecfce-2612-48aa-beeb-a41024ef2b6b-logs" (OuterVolumeSpecName: "logs") pod "581ecfce-2612-48aa-beeb-a41024ef2b6b" (UID: "581ecfce-2612-48aa-beeb-a41024ef2b6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.604693 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/581ecfce-2612-48aa-beeb-a41024ef2b6b-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.611878 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/581ecfce-2612-48aa-beeb-a41024ef2b6b-kube-api-access-d9brh" (OuterVolumeSpecName: "kube-api-access-d9brh") pod "581ecfce-2612-48aa-beeb-a41024ef2b6b" (UID: "581ecfce-2612-48aa-beeb-a41024ef2b6b"). 
InnerVolumeSpecName "kube-api-access-d9brh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.611955 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "581ecfce-2612-48aa-beeb-a41024ef2b6b" (UID: "581ecfce-2612-48aa-beeb-a41024ef2b6b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.634165 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "581ecfce-2612-48aa-beeb-a41024ef2b6b" (UID: "581ecfce-2612-48aa-beeb-a41024ef2b6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.635682 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/581ecfce-2612-48aa-beeb-a41024ef2b6b-config-data" (OuterVolumeSpecName: "config-data") pod "581ecfce-2612-48aa-beeb-a41024ef2b6b" (UID: "581ecfce-2612-48aa-beeb-a41024ef2b6b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.637330 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/581ecfce-2612-48aa-beeb-a41024ef2b6b-scripts" (OuterVolumeSpecName: "scripts") pod "581ecfce-2612-48aa-beeb-a41024ef2b6b" (UID: "581ecfce-2612-48aa-beeb-a41024ef2b6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.665101 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "581ecfce-2612-48aa-beeb-a41024ef2b6b" (UID: "581ecfce-2612-48aa-beeb-a41024ef2b6b"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.706971 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.707010 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.707021 4865 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/581ecfce-2612-48aa-beeb-a41024ef2b6b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.707032 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/581ecfce-2612-48aa-beeb-a41024ef2b6b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.707042 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9brh\" (UniqueName: \"kubernetes.io/projected/581ecfce-2612-48aa-beeb-a41024ef2b6b-kube-api-access-d9brh\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:09 crc kubenswrapper[4865]: I0123 12:15:09.707055 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/581ecfce-2612-48aa-beeb-a41024ef2b6b-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:10 crc kubenswrapper[4865]: I0123 12:15:10.187317 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d44bd7746-lpzlt" event={"ID":"581ecfce-2612-48aa-beeb-a41024ef2b6b","Type":"ContainerDied","Data":"fc4db0e600059eee9eabcc3820b513d6ea23e26f15324de6b742c4fec5ec42ac"} Jan 23 12:15:10 crc kubenswrapper[4865]: I0123 12:15:10.187361 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d44bd7746-lpzlt" Jan 23 12:15:10 crc kubenswrapper[4865]: I0123 12:15:10.187375 4865 scope.go:117] "RemoveContainer" containerID="196d053e89c6c1f701d070483f296d52d75d27bc87e6b3dab1359d2a12168ca1" Jan 23 12:15:10 crc kubenswrapper[4865]: I0123 12:15:10.218708 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d44bd7746-lpzlt"] Jan 23 12:15:10 crc kubenswrapper[4865]: I0123 12:15:10.229783 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d44bd7746-lpzlt"] Jan 23 12:15:10 crc kubenswrapper[4865]: I0123 12:15:10.383756 4865 scope.go:117] "RemoveContainer" containerID="a9f5b45dcc5b04b3bf3ecb6680aae49876c6d666882bf3eeb621de8ccd4a8a85" Jan 23 12:15:10 crc kubenswrapper[4865]: I0123 12:15:10.671850 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.123096 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-d9nb4"] Jan 23 12:15:11 crc kubenswrapper[4865]: E0123 12:15:11.123540 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0efc0078-ae1e-44d9-b57f-361da731424b" containerName="collect-profiles" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.123556 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0efc0078-ae1e-44d9-b57f-361da731424b" containerName="collect-profiles" Jan 23 12:15:11 crc kubenswrapper[4865]: E0123 12:15:11.123569 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon-log" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.123577 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon-log" Jan 23 12:15:11 crc kubenswrapper[4865]: E0123 12:15:11.123613 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.123620 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" Jan 23 12:15:11 crc kubenswrapper[4865]: E0123 12:15:11.123635 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.123642 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" Jan 23 12:15:11 crc kubenswrapper[4865]: E0123 12:15:11.123651 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.123657 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.123843 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.123854 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon-log" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.123867 4865 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.123892 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.123902 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.123911 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0efc0078-ae1e-44d9-b57f-361da731424b" containerName="collect-profiles" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.124478 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.126558 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.127318 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.132031 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d9nb4\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.132152 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-scripts\") pod \"nova-cell0-cell-mapping-d9nb4\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.132174 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm972\" (UniqueName: \"kubernetes.io/projected/5c09d8bf-3db6-47b0-b099-fe6be61d003f-kube-api-access-tm972\") pod \"nova-cell0-cell-mapping-d9nb4\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.132227 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-config-data\") pod \"nova-cell0-cell-mapping-d9nb4\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.135318 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-d9nb4"] Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.233459 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-config-data\") pod \"nova-cell0-cell-mapping-d9nb4\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.233575 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d9nb4\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.233666 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-scripts\") pod \"nova-cell0-cell-mapping-d9nb4\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.233684 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm972\" (UniqueName: \"kubernetes.io/projected/5c09d8bf-3db6-47b0-b099-fe6be61d003f-kube-api-access-tm972\") pod \"nova-cell0-cell-mapping-d9nb4\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.240982 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-config-data\") pod \"nova-cell0-cell-mapping-d9nb4\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.241205 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-d9nb4\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.244077 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-scripts\") pod \"nova-cell0-cell-mapping-d9nb4\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.272671 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm972\" (UniqueName: \"kubernetes.io/projected/5c09d8bf-3db6-47b0-b099-fe6be61d003f-kube-api-access-tm972\") pod \"nova-cell0-cell-mapping-d9nb4\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.350482 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:11 crc kubenswrapper[4865]: E0123 12:15:11.358027 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.358160 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" containerName="horizon" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.359483 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.376813 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.380672 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.382082 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.386401 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.398920 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.427286 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.437234 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91128024-d938-4a6e-9c1d-b701f716a1e2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"91128024-d938-4a6e-9c1d-b701f716a1e2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.437283 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.437346 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-config-data\") pod \"nova-api-0\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.437365 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjb49\" (UniqueName: \"kubernetes.io/projected/91128024-d938-4a6e-9c1d-b701f716a1e2-kube-api-access-fjb49\") pod \"nova-cell1-novncproxy-0\" (UID: \"91128024-d938-4a6e-9c1d-b701f716a1e2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.437392 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-logs\") pod \"nova-api-0\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.437438 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2flf\" (UniqueName: \"kubernetes.io/projected/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-kube-api-access-k2flf\") pod \"nova-api-0\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.437467 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/91128024-d938-4a6e-9c1d-b701f716a1e2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"91128024-d938-4a6e-9c1d-b701f716a1e2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.479637 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.485873 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.490636 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.496363 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.539500 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2flf\" (UniqueName: \"kubernetes.io/projected/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-kube-api-access-k2flf\") pod \"nova-api-0\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.539571 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91128024-d938-4a6e-9c1d-b701f716a1e2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"91128024-d938-4a6e-9c1d-b701f716a1e2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.539623 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91128024-d938-4a6e-9c1d-b701f716a1e2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"91128024-d938-4a6e-9c1d-b701f716a1e2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.539663 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.539774 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-config-data\") pod \"nova-api-0\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.539793 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjb49\" (UniqueName: \"kubernetes.io/projected/91128024-d938-4a6e-9c1d-b701f716a1e2-kube-api-access-fjb49\") pod \"nova-cell1-novncproxy-0\" (UID: \"91128024-d938-4a6e-9c1d-b701f716a1e2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.539832 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-logs\") pod \"nova-api-0\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.540285 4865 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-logs\") pod \"nova-api-0\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.553281 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91128024-d938-4a6e-9c1d-b701f716a1e2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"91128024-d938-4a6e-9c1d-b701f716a1e2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.554041 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-config-data\") pod \"nova-api-0\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.554373 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.572511 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.581411 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91128024-d938-4a6e-9c1d-b701f716a1e2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"91128024-d938-4a6e-9c1d-b701f716a1e2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.591694 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2flf\" (UniqueName: \"kubernetes.io/projected/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-kube-api-access-k2flf\") pod \"nova-api-0\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.615105 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjb49\" (UniqueName: \"kubernetes.io/projected/91128024-d938-4a6e-9c1d-b701f716a1e2-kube-api-access-fjb49\") pod \"nova-cell1-novncproxy-0\" (UID: \"91128024-d938-4a6e-9c1d-b701f716a1e2\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.626925 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.640941 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30259ac0-b395-4f88-9b6f-c397ea95590a-config-data\") pod \"nova-metadata-0\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.641012 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30259ac0-b395-4f88-9b6f-c397ea95590a-logs\") pod \"nova-metadata-0\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " pod="openstack/nova-metadata-0" Jan 23 
12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.641052 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbx47\" (UniqueName: \"kubernetes.io/projected/30259ac0-b395-4f88-9b6f-c397ea95590a-kube-api-access-bbx47\") pod \"nova-metadata-0\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.641092 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30259ac0-b395-4f88-9b6f-c397ea95590a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.690296 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.691913 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.703153 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.706839 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.721226 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.737302 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.751472 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbx47\" (UniqueName: \"kubernetes.io/projected/30259ac0-b395-4f88-9b6f-c397ea95590a-kube-api-access-bbx47\") pod \"nova-metadata-0\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.751546 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnm27\" (UniqueName: \"kubernetes.io/projected/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-kube-api-access-hnm27\") pod \"nova-scheduler-0\" (UID: \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.751632 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30259ac0-b395-4f88-9b6f-c397ea95590a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.751872 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30259ac0-b395-4f88-9b6f-c397ea95590a-config-data\") pod \"nova-metadata-0\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.751930 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.751993 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30259ac0-b395-4f88-9b6f-c397ea95590a-logs\") pod \"nova-metadata-0\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.752022 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-config-data\") pod \"nova-scheduler-0\" (UID: \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.753527 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30259ac0-b395-4f88-9b6f-c397ea95590a-logs\") pod \"nova-metadata-0\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.758576 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bbfd6cbff-hnkp4"] Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.760110 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.777534 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30259ac0-b395-4f88-9b6f-c397ea95590a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.789923 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bbfd6cbff-hnkp4"] Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.792983 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbx47\" (UniqueName: \"kubernetes.io/projected/30259ac0-b395-4f88-9b6f-c397ea95590a-kube-api-access-bbx47\") pod \"nova-metadata-0\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.805019 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30259ac0-b395-4f88-9b6f-c397ea95590a-config-data\") pod \"nova-metadata-0\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.811932 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.855525 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.856022 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-config-data\") pod \"nova-scheduler-0\" (UID: \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.856191 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnm27\" (UniqueName: \"kubernetes.io/projected/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-kube-api-access-hnm27\") pod \"nova-scheduler-0\" (UID: \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.870745 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-config-data\") pod \"nova-scheduler-0\" (UID: \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.874100 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.895054 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnm27\" (UniqueName: \"kubernetes.io/projected/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-kube-api-access-hnm27\") pod \"nova-scheduler-0\" (UID: \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.969380 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.969465 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-dns-svc\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.969713 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-config\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.969811 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggkxs\" (UniqueName: \"kubernetes.io/projected/c29c1861-9f5c-484e-87b9-13f34ea426d5-kube-api-access-ggkxs\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.970048 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:11 crc kubenswrapper[4865]: I0123 12:15:11.970280 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-dns-swift-storage-0\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.080060 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.080109 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-dns-svc\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.080244 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-config\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.080290 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggkxs\" (UniqueName: \"kubernetes.io/projected/c29c1861-9f5c-484e-87b9-13f34ea426d5-kube-api-access-ggkxs\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.080413 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.080524 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-dns-swift-storage-0\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.082024 4865 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-dns-swift-storage-0\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.082541 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.089043 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.109764 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.110900 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-dns-svc\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.112072 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-config\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.170677 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggkxs\" (UniqueName: \"kubernetes.io/projected/c29c1861-9f5c-484e-87b9-13f34ea426d5-kube-api-access-ggkxs\") pod \"dnsmasq-dns-5bbfd6cbff-hnkp4\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.200352 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="581ecfce-2612-48aa-beeb-a41024ef2b6b" path="/var/lib/kubelet/pods/581ecfce-2612-48aa-beeb-a41024ef2b6b/volumes" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.318364 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-d9nb4"] Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.422012 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.723613 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.774038 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:12 crc kubenswrapper[4865]: I0123 12:15:12.901431 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.140026 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:15:13 crc kubenswrapper[4865]: W0123 12:15:13.173584 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc731b4c1_d753_46ea_81cd_b49a3ed9afb3.slice/crio-6aa2a6482202a7943216a2ff8ce877540857ce2c4f7418342f4ab0b9b1607bdf WatchSource:0}: Error finding container 6aa2a6482202a7943216a2ff8ce877540857ce2c4f7418342f4ab0b9b1607bdf: Status 404 returned error can't find the container with id 6aa2a6482202a7943216a2ff8ce877540857ce2c4f7418342f4ab0b9b1607bdf Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.238500 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"30259ac0-b395-4f88-9b6f-c397ea95590a","Type":"ContainerStarted","Data":"cb5216c9e8aa5439c50b7bfa174e64c8252296bc46a3060d295b49c0929eb4fb"} Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.252848 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"91128024-d938-4a6e-9c1d-b701f716a1e2","Type":"ContainerStarted","Data":"1670b1027db7d05cfa2f61709955491a75d5a1ce4c54d671ba7e76fbc7469c63"} Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.283464 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d9nb4" event={"ID":"5c09d8bf-3db6-47b0-b099-fe6be61d003f","Type":"ContainerStarted","Data":"ce27f782c75c978fbeed8ac2146424543175e584ca44631991f76da6b731d27c"} Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.283507 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d9nb4" event={"ID":"5c09d8bf-3db6-47b0-b099-fe6be61d003f","Type":"ContainerStarted","Data":"f53fcd18baaa2401d876c06701e9436c50a57ad9a633b9c886d65761cea3f920"} Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.287234 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bbfd6cbff-hnkp4"] Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.292620 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c731b4c1-d753-46ea-81cd-b49a3ed9afb3","Type":"ContainerStarted","Data":"6aa2a6482202a7943216a2ff8ce877540857ce2c4f7418342f4ab0b9b1607bdf"} Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.303801 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f5dee3d-cfa7-4474-a3e8-83c6b956636f","Type":"ContainerStarted","Data":"2579acd22369c73176045d5bfbbc4e12292b5ad18b7fbef10789c5974b8b4062"} Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.313065 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-d9nb4" podStartSLOduration=2.31304916 podStartE2EDuration="2.31304916s" podCreationTimestamp="2026-01-23 12:15:11 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:15:13.311293988 +0000 UTC m=+1357.480366214" watchObservedRunningTime="2026-01-23 12:15:13.31304916 +0000 UTC m=+1357.482121406" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.449077 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zd6bk"] Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.451142 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.459902 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.460105 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.482287 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zd6bk"] Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.618808 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-scripts\") pod \"nova-cell1-conductor-db-sync-zd6bk\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.618900 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zd6bk\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.618938 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-config-data\") pod \"nova-cell1-conductor-db-sync-zd6bk\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.619192 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wdqp\" (UniqueName: \"kubernetes.io/projected/88f42439-e9e1-4de1-93fa-22a56502e805-kube-api-access-8wdqp\") pod \"nova-cell1-conductor-db-sync-zd6bk\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.721585 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wdqp\" (UniqueName: \"kubernetes.io/projected/88f42439-e9e1-4de1-93fa-22a56502e805-kube-api-access-8wdqp\") pod \"nova-cell1-conductor-db-sync-zd6bk\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.721738 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-scripts\") pod \"nova-cell1-conductor-db-sync-zd6bk\" (UID: 
\"88f42439-e9e1-4de1-93fa-22a56502e805\") " pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.721848 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zd6bk\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.721913 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-config-data\") pod \"nova-cell1-conductor-db-sync-zd6bk\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.728750 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-scripts\") pod \"nova-cell1-conductor-db-sync-zd6bk\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.729472 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-config-data\") pod \"nova-cell1-conductor-db-sync-zd6bk\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.730383 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zd6bk\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.742204 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wdqp\" (UniqueName: \"kubernetes.io/projected/88f42439-e9e1-4de1-93fa-22a56502e805-kube-api-access-8wdqp\") pod \"nova-cell1-conductor-db-sync-zd6bk\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:13 crc kubenswrapper[4865]: I0123 12:15:13.888176 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:14 crc kubenswrapper[4865]: I0123 12:15:14.319092 4865 generic.go:334] "Generic (PLEG): container finished" podID="c29c1861-9f5c-484e-87b9-13f34ea426d5" containerID="0c5c10942b9d312870836a739aad096a71cdb18c034b208d4ded5e8db5204903" exitCode=0 Jan 23 12:15:14 crc kubenswrapper[4865]: I0123 12:15:14.319708 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" event={"ID":"c29c1861-9f5c-484e-87b9-13f34ea426d5","Type":"ContainerDied","Data":"0c5c10942b9d312870836a739aad096a71cdb18c034b208d4ded5e8db5204903"} Jan 23 12:15:14 crc kubenswrapper[4865]: I0123 12:15:14.319825 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" event={"ID":"c29c1861-9f5c-484e-87b9-13f34ea426d5","Type":"ContainerStarted","Data":"ab5644c745ec96a554cb2d241a72759ff1dec8b87115cda9f8b03b4af51f998f"} Jan 23 12:15:14 crc kubenswrapper[4865]: I0123 12:15:14.632080 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zd6bk"] Jan 23 12:15:15 crc kubenswrapper[4865]: I0123 12:15:15.368862 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zd6bk" event={"ID":"88f42439-e9e1-4de1-93fa-22a56502e805","Type":"ContainerStarted","Data":"1be52bc48bdbefd252086db62ae062d61449059896de0d866760ad8f508b4b9a"} Jan 23 12:15:15 crc kubenswrapper[4865]: I0123 12:15:15.369256 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zd6bk" event={"ID":"88f42439-e9e1-4de1-93fa-22a56502e805","Type":"ContainerStarted","Data":"90a695934eabed59fbcc187627323e7c39f00cd723a5d03204afef8fc915dd43"} Jan 23 12:15:15 crc kubenswrapper[4865]: I0123 12:15:15.373501 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 12:15:15 crc kubenswrapper[4865]: I0123 12:15:15.380867 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" event={"ID":"c29c1861-9f5c-484e-87b9-13f34ea426d5","Type":"ContainerStarted","Data":"ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862"} Jan 23 12:15:15 crc kubenswrapper[4865]: I0123 12:15:15.388381 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:16 crc kubenswrapper[4865]: I0123 12:15:16.392882 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:16 crc kubenswrapper[4865]: I0123 12:15:16.414302 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zd6bk" podStartSLOduration=3.414286383 podStartE2EDuration="3.414286383s" podCreationTimestamp="2026-01-23 12:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:15:16.413454953 +0000 UTC m=+1360.582527179" watchObservedRunningTime="2026-01-23 12:15:16.414286383 +0000 UTC m=+1360.583358609" Jan 23 12:15:16 crc kubenswrapper[4865]: I0123 12:15:16.431795 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" podStartSLOduration=5.431772404 podStartE2EDuration="5.431772404s" podCreationTimestamp="2026-01-23 12:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 12:15:16.430088744 +0000 UTC m=+1360.599160970" watchObservedRunningTime="2026-01-23 12:15:16.431772404 +0000 UTC m=+1360.600844630" Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.480425 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"30259ac0-b395-4f88-9b6f-c397ea95590a","Type":"ContainerStarted","Data":"e6fccf5557ca02c9f4fa538ef29fb8f01efaf72e0eba3385e48e1093db929784"} Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.480834 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="30259ac0-b395-4f88-9b6f-c397ea95590a" containerName="nova-metadata-log" containerID="cri-o://c913e4d1e0d27f1092bf4b3ceb4c59b4b63e4db2ce2f33e8cacefd55c6657bd9" gracePeriod=30 Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.480882 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"30259ac0-b395-4f88-9b6f-c397ea95590a","Type":"ContainerStarted","Data":"c913e4d1e0d27f1092bf4b3ceb4c59b4b63e4db2ce2f33e8cacefd55c6657bd9"} Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.480939 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="30259ac0-b395-4f88-9b6f-c397ea95590a" containerName="nova-metadata-metadata" containerID="cri-o://e6fccf5557ca02c9f4fa538ef29fb8f01efaf72e0eba3385e48e1093db929784" gracePeriod=30 Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.487665 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="91128024-d938-4a6e-9c1d-b701f716a1e2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f0310b66599eef15e0ac46471c8223435f9763ab5144a99a3aec6e4ebdcc7180" gracePeriod=30 Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.487767 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"91128024-d938-4a6e-9c1d-b701f716a1e2","Type":"ContainerStarted","Data":"f0310b66599eef15e0ac46471c8223435f9763ab5144a99a3aec6e4ebdcc7180"} Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.489693 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c731b4c1-d753-46ea-81cd-b49a3ed9afb3","Type":"ContainerStarted","Data":"43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900"} Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.493772 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f5dee3d-cfa7-4474-a3e8-83c6b956636f","Type":"ContainerStarted","Data":"6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e"} Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.493811 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f5dee3d-cfa7-4474-a3e8-83c6b956636f","Type":"ContainerStarted","Data":"db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811"} Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.530233 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.767315897 podStartE2EDuration="7.530208632s" podCreationTimestamp="2026-01-23 12:15:11 +0000 UTC" firstStartedPulling="2026-01-23 12:15:12.758374804 +0000 UTC m=+1356.927447030" lastFinishedPulling="2026-01-23 12:15:17.521267539 +0000 UTC m=+1361.690339765" observedRunningTime="2026-01-23 
12:15:18.505060146 +0000 UTC m=+1362.674132382" watchObservedRunningTime="2026-01-23 12:15:18.530208632 +0000 UTC m=+1362.699280868" Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.551922 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8677176060000003 podStartE2EDuration="7.551900635s" podCreationTimestamp="2026-01-23 12:15:11 +0000 UTC" firstStartedPulling="2026-01-23 12:15:12.834286753 +0000 UTC m=+1357.003358969" lastFinishedPulling="2026-01-23 12:15:17.518469772 +0000 UTC m=+1361.687541998" observedRunningTime="2026-01-23 12:15:18.528396568 +0000 UTC m=+1362.697468804" watchObservedRunningTime="2026-01-23 12:15:18.551900635 +0000 UTC m=+1362.720972861" Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.587711 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.256269519 podStartE2EDuration="7.587684547s" podCreationTimestamp="2026-01-23 12:15:11 +0000 UTC" firstStartedPulling="2026-01-23 12:15:13.184256386 +0000 UTC m=+1357.353328612" lastFinishedPulling="2026-01-23 12:15:17.515671414 +0000 UTC m=+1361.684743640" observedRunningTime="2026-01-23 12:15:18.554941328 +0000 UTC m=+1362.724013554" watchObservedRunningTime="2026-01-23 12:15:18.587684547 +0000 UTC m=+1362.756756783" Jan 23 12:15:18 crc kubenswrapper[4865]: I0123 12:15:18.605047 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.012450004 podStartE2EDuration="7.605022895s" podCreationTimestamp="2026-01-23 12:15:11 +0000 UTC" firstStartedPulling="2026-01-23 12:15:12.921832003 +0000 UTC m=+1357.090904229" lastFinishedPulling="2026-01-23 12:15:17.514404894 +0000 UTC m=+1361.683477120" observedRunningTime="2026-01-23 12:15:18.572080201 +0000 UTC m=+1362.741152427" watchObservedRunningTime="2026-01-23 12:15:18.605022895 +0000 UTC m=+1362.774095121" Jan 23 12:15:19 crc kubenswrapper[4865]: I0123 12:15:19.504552 4865 generic.go:334] "Generic (PLEG): container finished" podID="30259ac0-b395-4f88-9b6f-c397ea95590a" containerID="e6fccf5557ca02c9f4fa538ef29fb8f01efaf72e0eba3385e48e1093db929784" exitCode=0 Jan 23 12:15:19 crc kubenswrapper[4865]: I0123 12:15:19.504927 4865 generic.go:334] "Generic (PLEG): container finished" podID="30259ac0-b395-4f88-9b6f-c397ea95590a" containerID="c913e4d1e0d27f1092bf4b3ceb4c59b4b63e4db2ce2f33e8cacefd55c6657bd9" exitCode=143 Jan 23 12:15:19 crc kubenswrapper[4865]: I0123 12:15:19.504632 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"30259ac0-b395-4f88-9b6f-c397ea95590a","Type":"ContainerDied","Data":"e6fccf5557ca02c9f4fa538ef29fb8f01efaf72e0eba3385e48e1093db929784"} Jan 23 12:15:19 crc kubenswrapper[4865]: I0123 12:15:19.505401 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"30259ac0-b395-4f88-9b6f-c397ea95590a","Type":"ContainerDied","Data":"c913e4d1e0d27f1092bf4b3ceb4c59b4b63e4db2ce2f33e8cacefd55c6657bd9"} Jan 23 12:15:19 crc kubenswrapper[4865]: I0123 12:15:19.949793 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.089533 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30259ac0-b395-4f88-9b6f-c397ea95590a-logs\") pod \"30259ac0-b395-4f88-9b6f-c397ea95590a\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.089633 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbx47\" (UniqueName: \"kubernetes.io/projected/30259ac0-b395-4f88-9b6f-c397ea95590a-kube-api-access-bbx47\") pod \"30259ac0-b395-4f88-9b6f-c397ea95590a\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.089766 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30259ac0-b395-4f88-9b6f-c397ea95590a-config-data\") pod \"30259ac0-b395-4f88-9b6f-c397ea95590a\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.089818 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30259ac0-b395-4f88-9b6f-c397ea95590a-combined-ca-bundle\") pod \"30259ac0-b395-4f88-9b6f-c397ea95590a\" (UID: \"30259ac0-b395-4f88-9b6f-c397ea95590a\") " Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.089955 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30259ac0-b395-4f88-9b6f-c397ea95590a-logs" (OuterVolumeSpecName: "logs") pod "30259ac0-b395-4f88-9b6f-c397ea95590a" (UID: "30259ac0-b395-4f88-9b6f-c397ea95590a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.090364 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30259ac0-b395-4f88-9b6f-c397ea95590a-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.095518 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30259ac0-b395-4f88-9b6f-c397ea95590a-kube-api-access-bbx47" (OuterVolumeSpecName: "kube-api-access-bbx47") pod "30259ac0-b395-4f88-9b6f-c397ea95590a" (UID: "30259ac0-b395-4f88-9b6f-c397ea95590a"). InnerVolumeSpecName "kube-api-access-bbx47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.119022 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30259ac0-b395-4f88-9b6f-c397ea95590a-config-data" (OuterVolumeSpecName: "config-data") pod "30259ac0-b395-4f88-9b6f-c397ea95590a" (UID: "30259ac0-b395-4f88-9b6f-c397ea95590a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.131358 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30259ac0-b395-4f88-9b6f-c397ea95590a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30259ac0-b395-4f88-9b6f-c397ea95590a" (UID: "30259ac0-b395-4f88-9b6f-c397ea95590a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.192088 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbx47\" (UniqueName: \"kubernetes.io/projected/30259ac0-b395-4f88-9b6f-c397ea95590a-kube-api-access-bbx47\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.192120 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30259ac0-b395-4f88-9b6f-c397ea95590a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.192130 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30259ac0-b395-4f88-9b6f-c397ea95590a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.515523 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"30259ac0-b395-4f88-9b6f-c397ea95590a","Type":"ContainerDied","Data":"cb5216c9e8aa5439c50b7bfa174e64c8252296bc46a3060d295b49c0929eb4fb"} Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.515771 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.516797 4865 scope.go:117] "RemoveContainer" containerID="e6fccf5557ca02c9f4fa538ef29fb8f01efaf72e0eba3385e48e1093db929784" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.545373 4865 scope.go:117] "RemoveContainer" containerID="c913e4d1e0d27f1092bf4b3ceb4c59b4b63e4db2ce2f33e8cacefd55c6657bd9" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.546124 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.558015 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.570875 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:20 crc kubenswrapper[4865]: E0123 12:15:20.571257 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30259ac0-b395-4f88-9b6f-c397ea95590a" containerName="nova-metadata-metadata" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.571274 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="30259ac0-b395-4f88-9b6f-c397ea95590a" containerName="nova-metadata-metadata" Jan 23 12:15:20 crc kubenswrapper[4865]: E0123 12:15:20.571297 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30259ac0-b395-4f88-9b6f-c397ea95590a" containerName="nova-metadata-log" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.571303 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="30259ac0-b395-4f88-9b6f-c397ea95590a" containerName="nova-metadata-log" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.571509 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="30259ac0-b395-4f88-9b6f-c397ea95590a" containerName="nova-metadata-log" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.571521 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="30259ac0-b395-4f88-9b6f-c397ea95590a" containerName="nova-metadata-metadata" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.574998 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.578167 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.578400 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.585941 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.700640 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-logs\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.700829 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-config-data\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.701113 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkb8q\" (UniqueName: \"kubernetes.io/projected/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-kube-api-access-rkb8q\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.701152 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.701356 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.803534 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.803838 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-logs\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.803992 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-config-data\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 
12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.804240 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-logs\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.804274 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkb8q\" (UniqueName: \"kubernetes.io/projected/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-kube-api-access-rkb8q\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.804318 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.809167 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.822818 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-config-data\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.823868 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.826624 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkb8q\" (UniqueName: \"kubernetes.io/projected/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-kube-api-access-rkb8q\") pod \"nova-metadata-0\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " pod="openstack/nova-metadata-0" Jan 23 12:15:20 crc kubenswrapper[4865]: I0123 12:15:20.912939 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:15:21 crc kubenswrapper[4865]: W0123 12:15:21.361195 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f0c4ac3_a57c_4452_9619_acd5dfbdece1.slice/crio-47533bc99ad0b794c0dbaff88135059c87d18cf6ece2e853ab58cab216db9be4 WatchSource:0}: Error finding container 47533bc99ad0b794c0dbaff88135059c87d18cf6ece2e853ab58cab216db9be4: Status 404 returned error can't find the container with id 47533bc99ad0b794c0dbaff88135059c87d18cf6ece2e853ab58cab216db9be4 Jan 23 12:15:21 crc kubenswrapper[4865]: I0123 12:15:21.374372 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:21 crc kubenswrapper[4865]: I0123 12:15:21.528196 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f0c4ac3-a57c-4452-9619-acd5dfbdece1","Type":"ContainerStarted","Data":"47533bc99ad0b794c0dbaff88135059c87d18cf6ece2e853ab58cab216db9be4"} Jan 23 12:15:21 crc kubenswrapper[4865]: I0123 12:15:21.745879 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 12:15:21 crc kubenswrapper[4865]: I0123 12:15:21.745915 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 12:15:21 crc kubenswrapper[4865]: I0123 12:15:21.746894 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:22 crc kubenswrapper[4865]: I0123 12:15:22.090717 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 12:15:22 crc kubenswrapper[4865]: I0123 12:15:22.090877 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 12:15:22 crc kubenswrapper[4865]: I0123 12:15:22.131547 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30259ac0-b395-4f88-9b6f-c397ea95590a" path="/var/lib/kubelet/pods/30259ac0-b395-4f88-9b6f-c397ea95590a/volumes" Jan 23 12:15:22 crc kubenswrapper[4865]: I0123 12:15:22.142749 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 12:15:22 crc kubenswrapper[4865]: I0123 12:15:22.425584 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:15:22 crc kubenswrapper[4865]: I0123 12:15:22.503582 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8bb9b79-km2l6"] Jan 23 12:15:22 crc kubenswrapper[4865]: I0123 12:15:22.503866 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" podUID="dcc29799-616d-44f6-8cee-0518f590df2e" containerName="dnsmasq-dns" containerID="cri-o://0ac463c061810ecb03feb2411a902e25148ea04e2d567810ca46c09875ee333d" gracePeriod=10 Jan 23 12:15:22 crc kubenswrapper[4865]: I0123 12:15:22.551445 4865 generic.go:334] "Generic (PLEG): container finished" podID="168185de-de1c-45e3-9a69-9f2145bc2371" containerID="3d4a7bc25f2ef511d838766fef7620ba7f9a741899bc2fab40b6e69dcc0773ba" exitCode=137 Jan 23 12:15:22 crc kubenswrapper[4865]: I0123 12:15:22.551506 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"168185de-de1c-45e3-9a69-9f2145bc2371","Type":"ContainerDied","Data":"3d4a7bc25f2ef511d838766fef7620ba7f9a741899bc2fab40b6e69dcc0773ba"} Jan 23 12:15:22 crc kubenswrapper[4865]: I0123 12:15:22.591701 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 12:15:22 crc kubenswrapper[4865]: I0123 12:15:22.827730 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:15:22 crc kubenswrapper[4865]: I0123 12:15:22.827993 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:15:23 crc kubenswrapper[4865]: I0123 12:15:23.561586 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f0c4ac3-a57c-4452-9619-acd5dfbdece1","Type":"ContainerStarted","Data":"35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085"} Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.222178 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" podUID="dcc29799-616d-44f6-8cee-0518f590df2e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.178:5353: connect: connection refused" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.327232 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.401315 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-config-data\") pod \"168185de-de1c-45e3-9a69-9f2145bc2371\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.401379 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-scripts\") pod \"168185de-de1c-45e3-9a69-9f2145bc2371\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.401408 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-combined-ca-bundle\") pod \"168185de-de1c-45e3-9a69-9f2145bc2371\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.401481 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv6mx\" (UniqueName: \"kubernetes.io/projected/168185de-de1c-45e3-9a69-9f2145bc2371-kube-api-access-dv6mx\") pod \"168185de-de1c-45e3-9a69-9f2145bc2371\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.401738 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/168185de-de1c-45e3-9a69-9f2145bc2371-log-httpd\") pod \"168185de-de1c-45e3-9a69-9f2145bc2371\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.401763 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-sg-core-conf-yaml\") pod \"168185de-de1c-45e3-9a69-9f2145bc2371\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.401818 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/168185de-de1c-45e3-9a69-9f2145bc2371-run-httpd\") pod \"168185de-de1c-45e3-9a69-9f2145bc2371\" (UID: \"168185de-de1c-45e3-9a69-9f2145bc2371\") " Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.402093 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/168185de-de1c-45e3-9a69-9f2145bc2371-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "168185de-de1c-45e3-9a69-9f2145bc2371" (UID: "168185de-de1c-45e3-9a69-9f2145bc2371"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.402386 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/168185de-de1c-45e3-9a69-9f2145bc2371-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "168185de-de1c-45e3-9a69-9f2145bc2371" (UID: "168185de-de1c-45e3-9a69-9f2145bc2371"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.402986 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/168185de-de1c-45e3-9a69-9f2145bc2371-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.403000 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/168185de-de1c-45e3-9a69-9f2145bc2371-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.408839 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/168185de-de1c-45e3-9a69-9f2145bc2371-kube-api-access-dv6mx" (OuterVolumeSpecName: "kube-api-access-dv6mx") pod "168185de-de1c-45e3-9a69-9f2145bc2371" (UID: "168185de-de1c-45e3-9a69-9f2145bc2371"). InnerVolumeSpecName "kube-api-access-dv6mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.408973 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-scripts" (OuterVolumeSpecName: "scripts") pod "168185de-de1c-45e3-9a69-9f2145bc2371" (UID: "168185de-de1c-45e3-9a69-9f2145bc2371"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.437801 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "168185de-de1c-45e3-9a69-9f2145bc2371" (UID: "168185de-de1c-45e3-9a69-9f2145bc2371"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.506488 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.506806 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.506870 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv6mx\" (UniqueName: \"kubernetes.io/projected/168185de-de1c-45e3-9a69-9f2145bc2371-kube-api-access-dv6mx\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.521737 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "168185de-de1c-45e3-9a69-9f2145bc2371" (UID: "168185de-de1c-45e3-9a69-9f2145bc2371"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.548163 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-config-data" (OuterVolumeSpecName: "config-data") pod "168185de-de1c-45e3-9a69-9f2145bc2371" (UID: "168185de-de1c-45e3-9a69-9f2145bc2371"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.580066 4865 generic.go:334] "Generic (PLEG): container finished" podID="5c09d8bf-3db6-47b0-b099-fe6be61d003f" containerID="ce27f782c75c978fbeed8ac2146424543175e584ca44631991f76da6b731d27c" exitCode=0 Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.580137 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d9nb4" event={"ID":"5c09d8bf-3db6-47b0-b099-fe6be61d003f","Type":"ContainerDied","Data":"ce27f782c75c978fbeed8ac2146424543175e584ca44631991f76da6b731d27c"} Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.584332 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"168185de-de1c-45e3-9a69-9f2145bc2371","Type":"ContainerDied","Data":"923b6e9e565fdb988a2f8d990e1e86c2e19508cf72c2a3e42cae87c10c23d95d"} Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.584375 4865 scope.go:117] "RemoveContainer" containerID="3d4a7bc25f2ef511d838766fef7620ba7f9a741899bc2fab40b6e69dcc0773ba" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.584522 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.590569 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f0c4ac3-a57c-4452-9619-acd5dfbdece1","Type":"ContainerStarted","Data":"187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228"} Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.593725 4865 generic.go:334] "Generic (PLEG): container finished" podID="dcc29799-616d-44f6-8cee-0518f590df2e" containerID="0ac463c061810ecb03feb2411a902e25148ea04e2d567810ca46c09875ee333d" exitCode=0 Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.594197 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" event={"ID":"dcc29799-616d-44f6-8cee-0518f590df2e","Type":"ContainerDied","Data":"0ac463c061810ecb03feb2411a902e25148ea04e2d567810ca46c09875ee333d"} Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.608764 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.608804 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/168185de-de1c-45e3-9a69-9f2145bc2371-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.619747 4865 scope.go:117] "RemoveContainer" containerID="2fbf48865b8f64c9eb9cc0bcf5d39659dadb1539b29971cbb0e954093b66ac10" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.635992 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.6359693669999995 podStartE2EDuration="4.635969367s" podCreationTimestamp="2026-01-23 12:15:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:15:24.627321139 +0000 UTC m=+1368.796393365" watchObservedRunningTime="2026-01-23 12:15:24.635969367 +0000 UTC m=+1368.805041613" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.666696 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.668241 4865 scope.go:117] "RemoveContainer" containerID="7bd8a803a25e2647c57845ebc2b7714d2c3e5d41514dc833807fd13b1f1738eb" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.675094 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.690039 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:15:24 crc kubenswrapper[4865]: E0123 12:15:24.690527 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="proxy-httpd" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.690551 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="proxy-httpd" Jan 23 12:15:24 crc kubenswrapper[4865]: E0123 12:15:24.690568 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="sg-core" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.690576 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="sg-core" Jan 23 12:15:24 crc kubenswrapper[4865]: E0123 12:15:24.690594 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="ceilometer-notification-agent" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.690620 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="ceilometer-notification-agent" Jan 23 12:15:24 crc kubenswrapper[4865]: E0123 12:15:24.690635 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="ceilometer-central-agent" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.690644 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="ceilometer-central-agent" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.690855 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="proxy-httpd" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.690876 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="ceilometer-notification-agent" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.690890 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="sg-core" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.690905 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" containerName="ceilometer-central-agent" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.722944 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.728445 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.732801 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.768868 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.817750 4865 scope.go:117] "RemoveContainer" containerID="e721e28826127b009e280713d649c0204f4eff3a98ae684545cf60e0aa47a9aa" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.840461 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e2777c-1c3a-4972-981e-7560fd820f7b-log-httpd\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.840542 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e2777c-1c3a-4972-981e-7560fd820f7b-run-httpd\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.840580 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nhlc\" (UniqueName: \"kubernetes.io/projected/88e2777c-1c3a-4972-981e-7560fd820f7b-kube-api-access-5nhlc\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.841467 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-config-data\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.841690 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.841802 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.841854 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-scripts\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.888957 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.946802 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e2777c-1c3a-4972-981e-7560fd820f7b-log-httpd\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.946861 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e2777c-1c3a-4972-981e-7560fd820f7b-run-httpd\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.947970 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e2777c-1c3a-4972-981e-7560fd820f7b-log-httpd\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.948208 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e2777c-1c3a-4972-981e-7560fd820f7b-run-httpd\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.948241 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nhlc\" (UniqueName: \"kubernetes.io/projected/88e2777c-1c3a-4972-981e-7560fd820f7b-kube-api-access-5nhlc\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.948261 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-config-data\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.948318 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.948343 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.948363 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-scripts\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.954111 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-config-data\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " 
pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.957193 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.963792 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.964066 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-scripts\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:24 crc kubenswrapper[4865]: I0123 12:15:24.981187 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nhlc\" (UniqueName: \"kubernetes.io/projected/88e2777c-1c3a-4972-981e-7560fd820f7b-kube-api-access-5nhlc\") pod \"ceilometer-0\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " pod="openstack/ceilometer-0" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.049424 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-dns-swift-storage-0\") pod \"dcc29799-616d-44f6-8cee-0518f590df2e\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.049476 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-ovsdbserver-sb\") pod \"dcc29799-616d-44f6-8cee-0518f590df2e\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.049579 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-dns-svc\") pod \"dcc29799-616d-44f6-8cee-0518f590df2e\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.049646 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-config\") pod \"dcc29799-616d-44f6-8cee-0518f590df2e\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.049723 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-ovsdbserver-nb\") pod \"dcc29799-616d-44f6-8cee-0518f590df2e\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.049783 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b52wl\" (UniqueName: \"kubernetes.io/projected/dcc29799-616d-44f6-8cee-0518f590df2e-kube-api-access-b52wl\") pod \"dcc29799-616d-44f6-8cee-0518f590df2e\" (UID: \"dcc29799-616d-44f6-8cee-0518f590df2e\") " Jan 23 
12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.053095 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcc29799-616d-44f6-8cee-0518f590df2e-kube-api-access-b52wl" (OuterVolumeSpecName: "kube-api-access-b52wl") pod "dcc29799-616d-44f6-8cee-0518f590df2e" (UID: "dcc29799-616d-44f6-8cee-0518f590df2e"). InnerVolumeSpecName "kube-api-access-b52wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.129778 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.142422 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dcc29799-616d-44f6-8cee-0518f590df2e" (UID: "dcc29799-616d-44f6-8cee-0518f590df2e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.143718 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dcc29799-616d-44f6-8cee-0518f590df2e" (UID: "dcc29799-616d-44f6-8cee-0518f590df2e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.156043 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b52wl\" (UniqueName: \"kubernetes.io/projected/dcc29799-616d-44f6-8cee-0518f590df2e-kube-api-access-b52wl\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.156286 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.156366 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.164019 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dcc29799-616d-44f6-8cee-0518f590df2e" (UID: "dcc29799-616d-44f6-8cee-0518f590df2e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.175002 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dcc29799-616d-44f6-8cee-0518f590df2e" (UID: "dcc29799-616d-44f6-8cee-0518f590df2e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.188737 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-config" (OuterVolumeSpecName: "config") pod "dcc29799-616d-44f6-8cee-0518f590df2e" (UID: "dcc29799-616d-44f6-8cee-0518f590df2e"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.257729 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.257956 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.257965 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dcc29799-616d-44f6-8cee-0518f590df2e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.610580 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" event={"ID":"dcc29799-616d-44f6-8cee-0518f590df2e","Type":"ContainerDied","Data":"2e37bf5305e6199093e4cba5f0d0de2b19527ad5d8783ea495acf27ef8c916b5"} Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.610651 4865 scope.go:117] "RemoveContainer" containerID="0ac463c061810ecb03feb2411a902e25148ea04e2d567810ca46c09875ee333d" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.610758 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8bb9b79-km2l6" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.617422 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:15:25 crc kubenswrapper[4865]: W0123 12:15:25.627204 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88e2777c_1c3a_4972_981e_7560fd820f7b.slice/crio-2c4ca15e1667776ce2ace712ff0d644389a6317c7023cfd99bba6db7126d2ddd WatchSource:0}: Error finding container 2c4ca15e1667776ce2ace712ff0d644389a6317c7023cfd99bba6db7126d2ddd: Status 404 returned error can't find the container with id 2c4ca15e1667776ce2ace712ff0d644389a6317c7023cfd99bba6db7126d2ddd Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.649278 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8bb9b79-km2l6"] Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.659693 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8bb9b79-km2l6"] Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.660825 4865 scope.go:117] "RemoveContainer" containerID="c0ad45a1201dd30467a79e3d63b687284caa67af648a7a6e5b67eafaf1974870" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.881990 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.915642 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.915760 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.977183 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-config-data\") pod \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.977270 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-scripts\") pod \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.977299 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm972\" (UniqueName: \"kubernetes.io/projected/5c09d8bf-3db6-47b0-b099-fe6be61d003f-kube-api-access-tm972\") pod \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.977352 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-combined-ca-bundle\") pod \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\" (UID: \"5c09d8bf-3db6-47b0-b099-fe6be61d003f\") " Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.983573 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c09d8bf-3db6-47b0-b099-fe6be61d003f-kube-api-access-tm972" (OuterVolumeSpecName: "kube-api-access-tm972") pod "5c09d8bf-3db6-47b0-b099-fe6be61d003f" (UID: "5c09d8bf-3db6-47b0-b099-fe6be61d003f"). InnerVolumeSpecName "kube-api-access-tm972". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:15:25 crc kubenswrapper[4865]: I0123 12:15:25.991235 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-scripts" (OuterVolumeSpecName: "scripts") pod "5c09d8bf-3db6-47b0-b099-fe6be61d003f" (UID: "5c09d8bf-3db6-47b0-b099-fe6be61d003f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.023706 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c09d8bf-3db6-47b0-b099-fe6be61d003f" (UID: "5c09d8bf-3db6-47b0-b099-fe6be61d003f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.038385 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-config-data" (OuterVolumeSpecName: "config-data") pod "5c09d8bf-3db6-47b0-b099-fe6be61d003f" (UID: "5c09d8bf-3db6-47b0-b099-fe6be61d003f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.079775 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.080108 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.080130 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm972\" (UniqueName: \"kubernetes.io/projected/5c09d8bf-3db6-47b0-b099-fe6be61d003f-kube-api-access-tm972\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.080140 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c09d8bf-3db6-47b0-b099-fe6be61d003f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.138049 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="168185de-de1c-45e3-9a69-9f2145bc2371" path="/var/lib/kubelet/pods/168185de-de1c-45e3-9a69-9f2145bc2371/volumes" Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.139704 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcc29799-616d-44f6-8cee-0518f590df2e" path="/var/lib/kubelet/pods/dcc29799-616d-44f6-8cee-0518f590df2e/volumes" Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.639947 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-d9nb4" Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.640259 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-d9nb4" event={"ID":"5c09d8bf-3db6-47b0-b099-fe6be61d003f","Type":"ContainerDied","Data":"f53fcd18baaa2401d876c06701e9436c50a57ad9a633b9c886d65761cea3f920"} Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.640357 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f53fcd18baaa2401d876c06701e9436c50a57ad9a633b9c886d65761cea3f920" Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.649911 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e2777c-1c3a-4972-981e-7560fd820f7b","Type":"ContainerStarted","Data":"920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510"} Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.649974 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e2777c-1c3a-4972-981e-7560fd820f7b","Type":"ContainerStarted","Data":"f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174"} Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.649987 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e2777c-1c3a-4972-981e-7560fd820f7b","Type":"ContainerStarted","Data":"2c4ca15e1667776ce2ace712ff0d644389a6317c7023cfd99bba6db7126d2ddd"} Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.791422 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.791919 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" containerName="nova-api-log" containerID="cri-o://db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811" gracePeriod=30 Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.792021 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" containerName="nova-api-api" containerID="cri-o://6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e" gracePeriod=30 Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.815206 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.815409 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c731b4c1-d753-46ea-81cd-b49a3ed9afb3" containerName="nova-scheduler-scheduler" containerID="cri-o://43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900" gracePeriod=30 Jan 23 12:15:26 crc kubenswrapper[4865]: I0123 12:15:26.922229 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:27 crc kubenswrapper[4865]: E0123 12:15:27.092040 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 12:15:27 crc kubenswrapper[4865]: E0123 12:15:27.093549 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 12:15:27 crc kubenswrapper[4865]: E0123 12:15:27.094937 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 12:15:27 crc kubenswrapper[4865]: E0123 12:15:27.094978 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c731b4c1-d753-46ea-81cd-b49a3ed9afb3" containerName="nova-scheduler-scheduler" Jan 23 12:15:27 crc kubenswrapper[4865]: I0123 12:15:27.661045 4865 generic.go:334] "Generic (PLEG): container finished" podID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" containerID="db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811" exitCode=143 Jan 23 12:15:27 crc kubenswrapper[4865]: I0123 12:15:27.661134 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f5dee3d-cfa7-4474-a3e8-83c6b956636f","Type":"ContainerDied","Data":"db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811"} Jan 23 12:15:27 crc kubenswrapper[4865]: I0123 12:15:27.683452 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"88e2777c-1c3a-4972-981e-7560fd820f7b","Type":"ContainerStarted","Data":"4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15"} Jan 23 12:15:28 crc kubenswrapper[4865]: I0123 12:15:28.696078 4865 generic.go:334] "Generic (PLEG): container finished" podID="88f42439-e9e1-4de1-93fa-22a56502e805" containerID="1be52bc48bdbefd252086db62ae062d61449059896de0d866760ad8f508b4b9a" exitCode=0 Jan 23 12:15:28 crc kubenswrapper[4865]: I0123 12:15:28.696172 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zd6bk" event={"ID":"88f42439-e9e1-4de1-93fa-22a56502e805","Type":"ContainerDied","Data":"1be52bc48bdbefd252086db62ae062d61449059896de0d866760ad8f508b4b9a"} Jan 23 12:15:28 crc kubenswrapper[4865]: I0123 12:15:28.701353 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e2777c-1c3a-4972-981e-7560fd820f7b","Type":"ContainerStarted","Data":"bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250"} Jan 23 12:15:28 crc kubenswrapper[4865]: I0123 12:15:28.701522 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 12:15:28 crc kubenswrapper[4865]: I0123 12:15:28.701800 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9f0c4ac3-a57c-4452-9619-acd5dfbdece1" containerName="nova-metadata-log" containerID="cri-o://35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085" gracePeriod=30 Jan 23 12:15:28 crc kubenswrapper[4865]: I0123 12:15:28.701995 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9f0c4ac3-a57c-4452-9619-acd5dfbdece1" containerName="nova-metadata-metadata" containerID="cri-o://187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228" gracePeriod=30 Jan 23 12:15:28 crc kubenswrapper[4865]: I0123 12:15:28.740005 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.209994817 podStartE2EDuration="4.739989944s" podCreationTimestamp="2026-01-23 12:15:24 +0000 UTC" firstStartedPulling="2026-01-23 12:15:25.641565739 +0000 UTC m=+1369.810637965" lastFinishedPulling="2026-01-23 12:15:28.171560876 +0000 UTC m=+1372.340633092" observedRunningTime="2026-01-23 12:15:28.736801538 +0000 UTC m=+1372.905873774" watchObservedRunningTime="2026-01-23 12:15:28.739989944 +0000 UTC m=+1372.909062170" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.416739 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.530228 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.556337 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-config-data\") pod \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.556655 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-combined-ca-bundle\") pod \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.556677 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkb8q\" (UniqueName: \"kubernetes.io/projected/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-kube-api-access-rkb8q\") pod \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.556724 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-nova-metadata-tls-certs\") pod \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.556775 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-logs\") pod \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\" (UID: \"9f0c4ac3-a57c-4452-9619-acd5dfbdece1\") " Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.557436 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-logs" (OuterVolumeSpecName: "logs") pod "9f0c4ac3-a57c-4452-9619-acd5dfbdece1" (UID: "9f0c4ac3-a57c-4452-9619-acd5dfbdece1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.571013 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-kube-api-access-rkb8q" (OuterVolumeSpecName: "kube-api-access-rkb8q") pod "9f0c4ac3-a57c-4452-9619-acd5dfbdece1" (UID: "9f0c4ac3-a57c-4452-9619-acd5dfbdece1"). InnerVolumeSpecName "kube-api-access-rkb8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.593759 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-config-data" (OuterVolumeSpecName: "config-data") pod "9f0c4ac3-a57c-4452-9619-acd5dfbdece1" (UID: "9f0c4ac3-a57c-4452-9619-acd5dfbdece1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.616824 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f0c4ac3-a57c-4452-9619-acd5dfbdece1" (UID: "9f0c4ac3-a57c-4452-9619-acd5dfbdece1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.620474 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9f0c4ac3-a57c-4452-9619-acd5dfbdece1" (UID: "9f0c4ac3-a57c-4452-9619-acd5dfbdece1"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.658126 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-combined-ca-bundle\") pod \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\" (UID: \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\") " Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.658249 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-config-data\") pod \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\" (UID: \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\") " Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.658308 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnm27\" (UniqueName: \"kubernetes.io/projected/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-kube-api-access-hnm27\") pod \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\" (UID: \"c731b4c1-d753-46ea-81cd-b49a3ed9afb3\") " Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.658803 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.658822 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.658835 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkb8q\" (UniqueName: \"kubernetes.io/projected/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-kube-api-access-rkb8q\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.658847 4865 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.658860 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f0c4ac3-a57c-4452-9619-acd5dfbdece1-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.661477 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-kube-api-access-hnm27" (OuterVolumeSpecName: "kube-api-access-hnm27") pod "c731b4c1-d753-46ea-81cd-b49a3ed9afb3" (UID: "c731b4c1-d753-46ea-81cd-b49a3ed9afb3"). InnerVolumeSpecName "kube-api-access-hnm27". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.687105 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-config-data" (OuterVolumeSpecName: "config-data") pod "c731b4c1-d753-46ea-81cd-b49a3ed9afb3" (UID: "c731b4c1-d753-46ea-81cd-b49a3ed9afb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.687787 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c731b4c1-d753-46ea-81cd-b49a3ed9afb3" (UID: "c731b4c1-d753-46ea-81cd-b49a3ed9afb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.712528 4865 generic.go:334] "Generic (PLEG): container finished" podID="9f0c4ac3-a57c-4452-9619-acd5dfbdece1" containerID="187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228" exitCode=0 Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.713546 4865 generic.go:334] "Generic (PLEG): container finished" podID="9f0c4ac3-a57c-4452-9619-acd5dfbdece1" containerID="35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085" exitCode=143 Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.713505 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f0c4ac3-a57c-4452-9619-acd5dfbdece1","Type":"ContainerDied","Data":"187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228"} Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.713860 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f0c4ac3-a57c-4452-9619-acd5dfbdece1","Type":"ContainerDied","Data":"35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085"} Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.713959 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f0c4ac3-a57c-4452-9619-acd5dfbdece1","Type":"ContainerDied","Data":"47533bc99ad0b794c0dbaff88135059c87d18cf6ece2e853ab58cab216db9be4"} Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.713829 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.714049 4865 scope.go:117] "RemoveContainer" containerID="187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.716308 4865 generic.go:334] "Generic (PLEG): container finished" podID="c731b4c1-d753-46ea-81cd-b49a3ed9afb3" containerID="43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900" exitCode=0 Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.716503 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.716514 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c731b4c1-d753-46ea-81cd-b49a3ed9afb3","Type":"ContainerDied","Data":"43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900"} Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.716920 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c731b4c1-d753-46ea-81cd-b49a3ed9afb3","Type":"ContainerDied","Data":"6aa2a6482202a7943216a2ff8ce877540857ce2c4f7418342f4ab0b9b1607bdf"} Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.759244 4865 scope.go:117] "RemoveContainer" containerID="35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.759999 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.760023 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnm27\" (UniqueName: \"kubernetes.io/projected/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-kube-api-access-hnm27\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.760032 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c731b4c1-d753-46ea-81cd-b49a3ed9afb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.784760 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.795558 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.805465 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.816155 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.843984 4865 scope.go:117] "RemoveContainer" containerID="187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228" Jan 23 12:15:29 crc kubenswrapper[4865]: E0123 12:15:29.844634 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228\": container with ID starting with 187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228 not found: ID does not exist" containerID="187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.844699 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228"} err="failed to get container status \"187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228\": rpc error: code = NotFound desc = could not find container \"187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228\": container with ID starting with 187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228 not found: ID does not exist" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 
12:15:29.844757 4865 scope.go:117] "RemoveContainer" containerID="35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085" Jan 23 12:15:29 crc kubenswrapper[4865]: E0123 12:15:29.845057 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085\": container with ID starting with 35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085 not found: ID does not exist" containerID="35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.845097 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085"} err="failed to get container status \"35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085\": rpc error: code = NotFound desc = could not find container \"35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085\": container with ID starting with 35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085 not found: ID does not exist" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.845115 4865 scope.go:117] "RemoveContainer" containerID="187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.845467 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228"} err="failed to get container status \"187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228\": rpc error: code = NotFound desc = could not find container \"187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228\": container with ID starting with 187a5217e8bdd867be4f5c449b3832ce3e48d49c8e88e3e7e3205b941fccc228 not found: ID does not exist" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.845491 4865 scope.go:117] "RemoveContainer" containerID="35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.845869 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085"} err="failed to get container status \"35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085\": rpc error: code = NotFound desc = could not find container \"35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085\": container with ID starting with 35d8fdfa70a3f6bead12a3e1eba077e8609b360578c986d2bf99d3629c5ba085 not found: ID does not exist" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.845933 4865 scope.go:117] "RemoveContainer" containerID="43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.853808 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:29 crc kubenswrapper[4865]: E0123 12:15:29.854265 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0c4ac3-a57c-4452-9619-acd5dfbdece1" containerName="nova-metadata-log" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.854283 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0c4ac3-a57c-4452-9619-acd5dfbdece1" containerName="nova-metadata-log" Jan 23 12:15:29 crc kubenswrapper[4865]: E0123 12:15:29.854295 4865 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9f0c4ac3-a57c-4452-9619-acd5dfbdece1" containerName="nova-metadata-metadata" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.854302 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0c4ac3-a57c-4452-9619-acd5dfbdece1" containerName="nova-metadata-metadata" Jan 23 12:15:29 crc kubenswrapper[4865]: E0123 12:15:29.854311 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc29799-616d-44f6-8cee-0518f590df2e" containerName="init" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.854318 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc29799-616d-44f6-8cee-0518f590df2e" containerName="init" Jan 23 12:15:29 crc kubenswrapper[4865]: E0123 12:15:29.854339 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c09d8bf-3db6-47b0-b099-fe6be61d003f" containerName="nova-manage" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.854345 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c09d8bf-3db6-47b0-b099-fe6be61d003f" containerName="nova-manage" Jan 23 12:15:29 crc kubenswrapper[4865]: E0123 12:15:29.854358 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c731b4c1-d753-46ea-81cd-b49a3ed9afb3" containerName="nova-scheduler-scheduler" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.854364 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c731b4c1-d753-46ea-81cd-b49a3ed9afb3" containerName="nova-scheduler-scheduler" Jan 23 12:15:29 crc kubenswrapper[4865]: E0123 12:15:29.854382 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcc29799-616d-44f6-8cee-0518f590df2e" containerName="dnsmasq-dns" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.854387 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcc29799-616d-44f6-8cee-0518f590df2e" containerName="dnsmasq-dns" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.856658 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f0c4ac3-a57c-4452-9619-acd5dfbdece1" containerName="nova-metadata-log" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.856703 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f0c4ac3-a57c-4452-9619-acd5dfbdece1" containerName="nova-metadata-metadata" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.856731 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcc29799-616d-44f6-8cee-0518f590df2e" containerName="dnsmasq-dns" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.856742 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c09d8bf-3db6-47b0-b099-fe6be61d003f" containerName="nova-manage" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.856762 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c731b4c1-d753-46ea-81cd-b49a3ed9afb3" containerName="nova-scheduler-scheduler" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.857729 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.865541 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.866088 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.867833 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-config-data\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.867939 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xj8s\" (UniqueName: \"kubernetes.io/projected/c4b1b266-e09f-40b7-800d-d95db2ad4632-kube-api-access-4xj8s\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.867994 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.877953 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4b1b266-e09f-40b7-800d-d95db2ad4632-logs\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.878282 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.886836 4865 scope.go:117] "RemoveContainer" containerID="43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.908868 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.910036 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.912711 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 12:15:29 crc kubenswrapper[4865]: E0123 12:15:29.915998 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900\": container with ID starting with 43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900 not found: ID does not exist" containerID="43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.916045 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900"} err="failed to get container status \"43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900\": rpc error: code = NotFound desc = could not find container \"43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900\": container with ID starting with 43af7774a7295b85041322050e66a8db67a0d1f0b2389433a55e7655ff5ff900 not found: ID does not exist" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.921692 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.938022 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.979820 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-config-data\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.980187 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xj8s\" (UniqueName: \"kubernetes.io/projected/c4b1b266-e09f-40b7-800d-d95db2ad4632-kube-api-access-4xj8s\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.980350 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.980705 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4b1b266-e09f-40b7-800d-d95db2ad4632-logs\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.980842 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsl2t\" (UniqueName: \"kubernetes.io/projected/b550b10b-74fe-4f27-92c8-011dd04e87e0-kube-api-access-jsl2t\") pod \"nova-scheduler-0\" (UID: \"b550b10b-74fe-4f27-92c8-011dd04e87e0\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.980987 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b550b10b-74fe-4f27-92c8-011dd04e87e0-config-data\") pod \"nova-scheduler-0\" (UID: \"b550b10b-74fe-4f27-92c8-011dd04e87e0\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.981035 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4b1b266-e09f-40b7-800d-d95db2ad4632-logs\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.981125 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.981246 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b550b10b-74fe-4f27-92c8-011dd04e87e0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b550b10b-74fe-4f27-92c8-011dd04e87e0\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.984981 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.989259 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-config-data\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:29 crc kubenswrapper[4865]: I0123 12:15:29.995074 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.019292 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xj8s\" (UniqueName: \"kubernetes.io/projected/c4b1b266-e09f-40b7-800d-d95db2ad4632-kube-api-access-4xj8s\") pod \"nova-metadata-0\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " pod="openstack/nova-metadata-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.083240 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsl2t\" (UniqueName: \"kubernetes.io/projected/b550b10b-74fe-4f27-92c8-011dd04e87e0-kube-api-access-jsl2t\") pod \"nova-scheduler-0\" (UID: \"b550b10b-74fe-4f27-92c8-011dd04e87e0\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.083285 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b550b10b-74fe-4f27-92c8-011dd04e87e0-config-data\") pod \"nova-scheduler-0\" (UID: \"b550b10b-74fe-4f27-92c8-011dd04e87e0\") " pod="openstack/nova-scheduler-0" 
Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.083340 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b550b10b-74fe-4f27-92c8-011dd04e87e0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b550b10b-74fe-4f27-92c8-011dd04e87e0\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.087129 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b550b10b-74fe-4f27-92c8-011dd04e87e0-config-data\") pod \"nova-scheduler-0\" (UID: \"b550b10b-74fe-4f27-92c8-011dd04e87e0\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.088457 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b550b10b-74fe-4f27-92c8-011dd04e87e0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b550b10b-74fe-4f27-92c8-011dd04e87e0\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.101524 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsl2t\" (UniqueName: \"kubernetes.io/projected/b550b10b-74fe-4f27-92c8-011dd04e87e0-kube-api-access-jsl2t\") pod \"nova-scheduler-0\" (UID: \"b550b10b-74fe-4f27-92c8-011dd04e87e0\") " pod="openstack/nova-scheduler-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.134949 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f0c4ac3-a57c-4452-9619-acd5dfbdece1" path="/var/lib/kubelet/pods/9f0c4ac3-a57c-4452-9619-acd5dfbdece1/volumes" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.135845 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c731b4c1-d753-46ea-81cd-b49a3ed9afb3" path="/var/lib/kubelet/pods/c731b4c1-d753-46ea-81cd-b49a3ed9afb3/volumes" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.142441 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.184322 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-scripts\") pod \"88f42439-e9e1-4de1-93fa-22a56502e805\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.184516 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wdqp\" (UniqueName: \"kubernetes.io/projected/88f42439-e9e1-4de1-93fa-22a56502e805-kube-api-access-8wdqp\") pod \"88f42439-e9e1-4de1-93fa-22a56502e805\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.184636 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-combined-ca-bundle\") pod \"88f42439-e9e1-4de1-93fa-22a56502e805\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.184683 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-config-data\") pod \"88f42439-e9e1-4de1-93fa-22a56502e805\" (UID: \"88f42439-e9e1-4de1-93fa-22a56502e805\") " Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.189026 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.190290 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f42439-e9e1-4de1-93fa-22a56502e805-kube-api-access-8wdqp" (OuterVolumeSpecName: "kube-api-access-8wdqp") pod "88f42439-e9e1-4de1-93fa-22a56502e805" (UID: "88f42439-e9e1-4de1-93fa-22a56502e805"). InnerVolumeSpecName "kube-api-access-8wdqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.190557 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-scripts" (OuterVolumeSpecName: "scripts") pod "88f42439-e9e1-4de1-93fa-22a56502e805" (UID: "88f42439-e9e1-4de1-93fa-22a56502e805"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.216713 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88f42439-e9e1-4de1-93fa-22a56502e805" (UID: "88f42439-e9e1-4de1-93fa-22a56502e805"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.224646 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-config-data" (OuterVolumeSpecName: "config-data") pod "88f42439-e9e1-4de1-93fa-22a56502e805" (UID: "88f42439-e9e1-4de1-93fa-22a56502e805"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.245546 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.286385 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wdqp\" (UniqueName: \"kubernetes.io/projected/88f42439-e9e1-4de1-93fa-22a56502e805-kube-api-access-8wdqp\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.286426 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.286439 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.286451 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f42439-e9e1-4de1-93fa-22a56502e805-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.680542 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.764553 4865 generic.go:334] "Generic (PLEG): container finished" podID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" containerID="6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e" exitCode=0 Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.764663 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f5dee3d-cfa7-4474-a3e8-83c6b956636f","Type":"ContainerDied","Data":"6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e"} Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.764697 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0f5dee3d-cfa7-4474-a3e8-83c6b956636f","Type":"ContainerDied","Data":"2579acd22369c73176045d5bfbbc4e12292b5ad18b7fbef10789c5974b8b4062"} Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.764719 4865 scope.go:117] "RemoveContainer" containerID="6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.764891 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.770497 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zd6bk" event={"ID":"88f42439-e9e1-4de1-93fa-22a56502e805","Type":"ContainerDied","Data":"90a695934eabed59fbcc187627323e7c39f00cd723a5d03204afef8fc915dd43"} Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.770556 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90a695934eabed59fbcc187627323e7c39f00cd723a5d03204afef8fc915dd43" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.770676 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zd6bk" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.797189 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-config-data\") pod \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.797244 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-combined-ca-bundle\") pod \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.797373 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-logs\") pod \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.797413 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2flf\" (UniqueName: \"kubernetes.io/projected/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-kube-api-access-k2flf\") pod \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\" (UID: \"0f5dee3d-cfa7-4474-a3e8-83c6b956636f\") " Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.798356 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-logs" (OuterVolumeSpecName: "logs") pod "0f5dee3d-cfa7-4474-a3e8-83c6b956636f" (UID: "0f5dee3d-cfa7-4474-a3e8-83c6b956636f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.804258 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.808188 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 12:15:30 crc kubenswrapper[4865]: E0123 12:15:30.808912 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f42439-e9e1-4de1-93fa-22a56502e805" containerName="nova-cell1-conductor-db-sync" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.808927 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f42439-e9e1-4de1-93fa-22a56502e805" containerName="nova-cell1-conductor-db-sync" Jan 23 12:15:30 crc kubenswrapper[4865]: E0123 12:15:30.808969 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" containerName="nova-api-log" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.808976 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" containerName="nova-api-log" Jan 23 12:15:30 crc kubenswrapper[4865]: E0123 12:15:30.809045 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" containerName="nova-api-api" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.809052 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" containerName="nova-api-api" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.809478 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" containerName="nova-api-api" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.809515 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" containerName="nova-api-log" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.809539 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f42439-e9e1-4de1-93fa-22a56502e805" containerName="nova-cell1-conductor-db-sync" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.810225 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.823272 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.832024 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-kube-api-access-k2flf" (OuterVolumeSpecName: "kube-api-access-k2flf") pod "0f5dee3d-cfa7-4474-a3e8-83c6b956636f" (UID: "0f5dee3d-cfa7-4474-a3e8-83c6b956636f"). InnerVolumeSpecName "kube-api-access-k2flf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.856963 4865 scope.go:117] "RemoveContainer" containerID="db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.867907 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 12:15:30 crc kubenswrapper[4865]: W0123 12:15:30.872698 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4b1b266_e09f_40b7_800d_d95db2ad4632.slice/crio-8ce00c02134c370a45d22e3dd86da5bf1ac2e089ab8c15f166ca368140574bd7 WatchSource:0}: Error finding container 8ce00c02134c370a45d22e3dd86da5bf1ac2e089ab8c15f166ca368140574bd7: Status 404 returned error can't find the container with id 8ce00c02134c370a45d22e3dd86da5bf1ac2e089ab8c15f166ca368140574bd7 Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.897033 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.899763 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-config-data" (OuterVolumeSpecName: "config-data") pod "0f5dee3d-cfa7-4474-a3e8-83c6b956636f" (UID: "0f5dee3d-cfa7-4474-a3e8-83c6b956636f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.902252 4865 scope.go:117] "RemoveContainer" containerID="6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e" Jan 23 12:15:30 crc kubenswrapper[4865]: E0123 12:15:30.905814 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e\": container with ID starting with 6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e not found: ID does not exist" containerID="6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.905854 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a8c9b2e-3d78-440e-aa9d-5e5e0f414573-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9a8c9b2e-3d78-440e-aa9d-5e5e0f414573\") " pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.905853 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e"} err="failed to get container status \"6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e\": rpc error: code = NotFound desc = could not find container \"6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e\": container with ID starting with 6410f6ca636e5a7065736438eaa2e18c7c6ec61afbaf3c54b7cf48b69d76100e not found: ID does not exist" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.905902 4865 scope.go:117] "RemoveContainer" containerID="db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.905957 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9a8c9b2e-3d78-440e-aa9d-5e5e0f414573-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9a8c9b2e-3d78-440e-aa9d-5e5e0f414573\") " pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.906000 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z22r5\" (UniqueName: \"kubernetes.io/projected/9a8c9b2e-3d78-440e-aa9d-5e5e0f414573-kube-api-access-z22r5\") pod \"nova-cell1-conductor-0\" (UID: \"9a8c9b2e-3d78-440e-aa9d-5e5e0f414573\") " pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.906075 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2flf\" (UniqueName: \"kubernetes.io/projected/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-kube-api-access-k2flf\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.906086 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:30 crc kubenswrapper[4865]: E0123 12:15:30.907742 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811\": container with ID starting with db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811 not found: ID does not exist" containerID="db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.907785 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811"} err="failed to get container status \"db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811\": rpc error: code = NotFound desc = could not find container \"db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811\": container with ID starting with db8a338502b651dde9a2163302d1e4cba81efffd9985e2d807791554be648811 not found: ID does not exist" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.935002 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f5dee3d-cfa7-4474-a3e8-83c6b956636f" (UID: "0f5dee3d-cfa7-4474-a3e8-83c6b956636f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:30 crc kubenswrapper[4865]: I0123 12:15:30.986790 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.007188 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8c9b2e-3d78-440e-aa9d-5e5e0f414573-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9a8c9b2e-3d78-440e-aa9d-5e5e0f414573\") " pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.008507 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z22r5\" (UniqueName: \"kubernetes.io/projected/9a8c9b2e-3d78-440e-aa9d-5e5e0f414573-kube-api-access-z22r5\") pod \"nova-cell1-conductor-0\" (UID: \"9a8c9b2e-3d78-440e-aa9d-5e5e0f414573\") " pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.008902 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a8c9b2e-3d78-440e-aa9d-5e5e0f414573-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9a8c9b2e-3d78-440e-aa9d-5e5e0f414573\") " pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.009175 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f5dee3d-cfa7-4474-a3e8-83c6b956636f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.010017 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a8c9b2e-3d78-440e-aa9d-5e5e0f414573-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9a8c9b2e-3d78-440e-aa9d-5e5e0f414573\") " pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.014874 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a8c9b2e-3d78-440e-aa9d-5e5e0f414573-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9a8c9b2e-3d78-440e-aa9d-5e5e0f414573\") " pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.027488 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z22r5\" (UniqueName: \"kubernetes.io/projected/9a8c9b2e-3d78-440e-aa9d-5e5e0f414573-kube-api-access-z22r5\") pod \"nova-cell1-conductor-0\" (UID: \"9a8c9b2e-3d78-440e-aa9d-5e5e0f414573\") " pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.134379 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.144803 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.161651 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.163304 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.166204 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.172434 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.192253 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.317113 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7rz2\" (UniqueName: \"kubernetes.io/projected/fb131867-ea3a-406e-adf4-6c2116ff3a5e-kube-api-access-k7rz2\") pod \"nova-api-0\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.317498 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb131867-ea3a-406e-adf4-6c2116ff3a5e-config-data\") pod \"nova-api-0\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.317566 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb131867-ea3a-406e-adf4-6c2116ff3a5e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.317663 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb131867-ea3a-406e-adf4-6c2116ff3a5e-logs\") pod \"nova-api-0\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.419851 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb131867-ea3a-406e-adf4-6c2116ff3a5e-config-data\") pod \"nova-api-0\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.419928 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb131867-ea3a-406e-adf4-6c2116ff3a5e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.419988 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb131867-ea3a-406e-adf4-6c2116ff3a5e-logs\") pod \"nova-api-0\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.420052 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7rz2\" (UniqueName: \"kubernetes.io/projected/fb131867-ea3a-406e-adf4-6c2116ff3a5e-kube-api-access-k7rz2\") pod \"nova-api-0\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.425011 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb131867-ea3a-406e-adf4-6c2116ff3a5e-logs\") pod \"nova-api-0\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.431219 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb131867-ea3a-406e-adf4-6c2116ff3a5e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.433219 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb131867-ea3a-406e-adf4-6c2116ff3a5e-config-data\") pod \"nova-api-0\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.443189 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7rz2\" (UniqueName: \"kubernetes.io/projected/fb131867-ea3a-406e-adf4-6c2116ff3a5e-kube-api-access-k7rz2\") pod \"nova-api-0\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.637245 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.733321 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.808811 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4b1b266-e09f-40b7-800d-d95db2ad4632","Type":"ContainerStarted","Data":"d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7"} Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.808856 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4b1b266-e09f-40b7-800d-d95db2ad4632","Type":"ContainerStarted","Data":"6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b"} Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.808867 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4b1b266-e09f-40b7-800d-d95db2ad4632","Type":"ContainerStarted","Data":"8ce00c02134c370a45d22e3dd86da5bf1ac2e089ab8c15f166ca368140574bd7"} Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.811288 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9a8c9b2e-3d78-440e-aa9d-5e5e0f414573","Type":"ContainerStarted","Data":"c00c309a8fcb0b24f6f9142217df322cdc8f216b3f256f7b92fa2484f50c96d9"} Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.812404 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b550b10b-74fe-4f27-92c8-011dd04e87e0","Type":"ContainerStarted","Data":"c26f067d3e2f7b3224813d43107ffc59c9cb10c26f18831dfb689b39ad0809f9"} Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.812430 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b550b10b-74fe-4f27-92c8-011dd04e87e0","Type":"ContainerStarted","Data":"befd31821ce44953ebf5a5d6b02e7a49c1e793a51fcb5aa140144fba40d917f5"} Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.840314 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=2.840293874 podStartE2EDuration="2.840293874s" podCreationTimestamp="2026-01-23 12:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:15:31.837840506 +0000 UTC m=+1376.006912752" watchObservedRunningTime="2026-01-23 12:15:31.840293874 +0000 UTC m=+1376.009366120" Jan 23 12:15:31 crc kubenswrapper[4865]: I0123 12:15:31.855498 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.85547694 podStartE2EDuration="2.85547694s" podCreationTimestamp="2026-01-23 12:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:15:31.854906247 +0000 UTC m=+1376.023978473" watchObservedRunningTime="2026-01-23 12:15:31.85547694 +0000 UTC m=+1376.024549166" Jan 23 12:15:32 crc kubenswrapper[4865]: I0123 12:15:32.130908 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f5dee3d-cfa7-4474-a3e8-83c6b956636f" path="/var/lib/kubelet/pods/0f5dee3d-cfa7-4474-a3e8-83c6b956636f/volumes" Jan 23 12:15:32 crc kubenswrapper[4865]: I0123 12:15:32.138095 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:32 crc kubenswrapper[4865]: W0123 12:15:32.149795 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb131867_ea3a_406e_adf4_6c2116ff3a5e.slice/crio-120a7b3eec3077aaf696e86b2fde1fa14d3768dda2e58ab6c9c6c504981034cb WatchSource:0}: Error finding container 120a7b3eec3077aaf696e86b2fde1fa14d3768dda2e58ab6c9c6c504981034cb: Status 404 returned error can't find the container with id 120a7b3eec3077aaf696e86b2fde1fa14d3768dda2e58ab6c9c6c504981034cb Jan 23 12:15:32 crc kubenswrapper[4865]: I0123 12:15:32.830649 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb131867-ea3a-406e-adf4-6c2116ff3a5e","Type":"ContainerStarted","Data":"fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e"} Jan 23 12:15:32 crc kubenswrapper[4865]: I0123 12:15:32.831146 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb131867-ea3a-406e-adf4-6c2116ff3a5e","Type":"ContainerStarted","Data":"d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999"} Jan 23 12:15:32 crc kubenswrapper[4865]: I0123 12:15:32.831162 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb131867-ea3a-406e-adf4-6c2116ff3a5e","Type":"ContainerStarted","Data":"120a7b3eec3077aaf696e86b2fde1fa14d3768dda2e58ab6c9c6c504981034cb"} Jan 23 12:15:32 crc kubenswrapper[4865]: I0123 12:15:32.836782 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9a8c9b2e-3d78-440e-aa9d-5e5e0f414573","Type":"ContainerStarted","Data":"7c1974f320119ec30cec40b4cd037805eb71493e29711490e6c5ebce67e214d5"} Jan 23 12:15:32 crc kubenswrapper[4865]: I0123 12:15:32.854929 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.854911905 podStartE2EDuration="1.854911905s" podCreationTimestamp="2026-01-23 12:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:15:32.848782177 +0000 UTC m=+1377.017854403" 
watchObservedRunningTime="2026-01-23 12:15:32.854911905 +0000 UTC m=+1377.023984131" Jan 23 12:15:33 crc kubenswrapper[4865]: I0123 12:15:33.845852 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:35 crc kubenswrapper[4865]: I0123 12:15:35.190482 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 12:15:35 crc kubenswrapper[4865]: I0123 12:15:35.190782 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 12:15:35 crc kubenswrapper[4865]: I0123 12:15:35.246542 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 12:15:40 crc kubenswrapper[4865]: I0123 12:15:40.189635 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 12:15:40 crc kubenswrapper[4865]: I0123 12:15:40.190979 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 12:15:40 crc kubenswrapper[4865]: I0123 12:15:40.247308 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 12:15:40 crc kubenswrapper[4865]: I0123 12:15:40.281765 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 12:15:40 crc kubenswrapper[4865]: I0123 12:15:40.306377 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=10.306362518 podStartE2EDuration="10.306362518s" podCreationTimestamp="2026-01-23 12:15:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:15:32.869982968 +0000 UTC m=+1377.039055184" watchObservedRunningTime="2026-01-23 12:15:40.306362518 +0000 UTC m=+1384.475434744" Jan 23 12:15:41 crc kubenswrapper[4865]: I0123 12:15:41.206110 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 23 12:15:41 crc kubenswrapper[4865]: I0123 12:15:41.207963 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:15:41 crc kubenswrapper[4865]: I0123 12:15:41.208098 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:15:41 crc kubenswrapper[4865]: I0123 12:15:41.250335 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 12:15:41 crc kubenswrapper[4865]: I0123 12:15:41.637644 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 12:15:41 crc kubenswrapper[4865]: I0123 12:15:41.637688 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 12:15:42 crc kubenswrapper[4865]: I0123 12:15:42.723013 4865 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:15:42 crc kubenswrapper[4865]: I0123 12:15:42.723234 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:15:49 crc kubenswrapper[4865]: I0123 12:15:49.285874 4865 generic.go:334] "Generic (PLEG): container finished" podID="91128024-d938-4a6e-9c1d-b701f716a1e2" containerID="f0310b66599eef15e0ac46471c8223435f9763ab5144a99a3aec6e4ebdcc7180" exitCode=137 Jan 23 12:15:49 crc kubenswrapper[4865]: I0123 12:15:49.285949 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"91128024-d938-4a6e-9c1d-b701f716a1e2","Type":"ContainerDied","Data":"f0310b66599eef15e0ac46471c8223435f9763ab5144a99a3aec6e4ebdcc7180"} Jan 23 12:15:50 crc kubenswrapper[4865]: I0123 12:15:50.197933 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 12:15:50 crc kubenswrapper[4865]: I0123 12:15:50.198548 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 12:15:50 crc kubenswrapper[4865]: I0123 12:15:50.205004 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 12:15:50 crc kubenswrapper[4865]: I0123 12:15:50.301265 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:50.775354 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:50.894649 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91128024-d938-4a6e-9c1d-b701f716a1e2-config-data\") pod \"91128024-d938-4a6e-9c1d-b701f716a1e2\" (UID: \"91128024-d938-4a6e-9c1d-b701f716a1e2\") " Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:50.895037 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjb49\" (UniqueName: \"kubernetes.io/projected/91128024-d938-4a6e-9c1d-b701f716a1e2-kube-api-access-fjb49\") pod \"91128024-d938-4a6e-9c1d-b701f716a1e2\" (UID: \"91128024-d938-4a6e-9c1d-b701f716a1e2\") " Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:50.895183 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91128024-d938-4a6e-9c1d-b701f716a1e2-combined-ca-bundle\") pod \"91128024-d938-4a6e-9c1d-b701f716a1e2\" (UID: \"91128024-d938-4a6e-9c1d-b701f716a1e2\") " Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:50.902694 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91128024-d938-4a6e-9c1d-b701f716a1e2-kube-api-access-fjb49" (OuterVolumeSpecName: "kube-api-access-fjb49") pod "91128024-d938-4a6e-9c1d-b701f716a1e2" (UID: "91128024-d938-4a6e-9c1d-b701f716a1e2"). InnerVolumeSpecName "kube-api-access-fjb49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:50.936715 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91128024-d938-4a6e-9c1d-b701f716a1e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91128024-d938-4a6e-9c1d-b701f716a1e2" (UID: "91128024-d938-4a6e-9c1d-b701f716a1e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:50.958116 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91128024-d938-4a6e-9c1d-b701f716a1e2-config-data" (OuterVolumeSpecName: "config-data") pod "91128024-d938-4a6e-9c1d-b701f716a1e2" (UID: "91128024-d938-4a6e-9c1d-b701f716a1e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:50.997678 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91128024-d938-4a6e-9c1d-b701f716a1e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:50.997721 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjb49\" (UniqueName: \"kubernetes.io/projected/91128024-d938-4a6e-9c1d-b701f716a1e2-kube-api-access-fjb49\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:50.997735 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91128024-d938-4a6e-9c1d-b701f716a1e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.305576 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.305577 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"91128024-d938-4a6e-9c1d-b701f716a1e2","Type":"ContainerDied","Data":"1670b1027db7d05cfa2f61709955491a75d5a1ce4c54d671ba7e76fbc7469c63"} Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.305943 4865 scope.go:117] "RemoveContainer" containerID="f0310b66599eef15e0ac46471c8223435f9763ab5144a99a3aec6e4ebdcc7180" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.352583 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.367563 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.382187 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 12:15:51 crc kubenswrapper[4865]: E0123 12:15:51.383056 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91128024-d938-4a6e-9c1d-b701f716a1e2" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.383090 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="91128024-d938-4a6e-9c1d-b701f716a1e2" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.383593 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="91128024-d938-4a6e-9c1d-b701f716a1e2" containerName="nova-cell1-novncproxy-novncproxy" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.384346 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.389063 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.389404 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.389566 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.424112 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.507501 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8060e11e-ba4c-4175-9866-e9d61f246492-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.507539 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8060e11e-ba4c-4175-9866-e9d61f246492-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.507582 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8060e11e-ba4c-4175-9866-e9d61f246492-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.507728 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8060e11e-ba4c-4175-9866-e9d61f246492-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.507866 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fssmb\" (UniqueName: \"kubernetes.io/projected/8060e11e-ba4c-4175-9866-e9d61f246492-kube-api-access-fssmb\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.609092 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8060e11e-ba4c-4175-9866-e9d61f246492-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.609134 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8060e11e-ba4c-4175-9866-e9d61f246492-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.609163 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8060e11e-ba4c-4175-9866-e9d61f246492-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.609218 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8060e11e-ba4c-4175-9866-e9d61f246492-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.609299 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fssmb\" (UniqueName: \"kubernetes.io/projected/8060e11e-ba4c-4175-9866-e9d61f246492-kube-api-access-fssmb\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.614985 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8060e11e-ba4c-4175-9866-e9d61f246492-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.614993 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8060e11e-ba4c-4175-9866-e9d61f246492-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.618201 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8060e11e-ba4c-4175-9866-e9d61f246492-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.619021 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8060e11e-ba4c-4175-9866-e9d61f246492-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.627070 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fssmb\" (UniqueName: \"kubernetes.io/projected/8060e11e-ba4c-4175-9866-e9d61f246492-kube-api-access-fssmb\") pod \"nova-cell1-novncproxy-0\" (UID: \"8060e11e-ba4c-4175-9866-e9d61f246492\") " pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.641135 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.641884 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.643473 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.647075 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 12:15:51 crc kubenswrapper[4865]: I0123 12:15:51.732081 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.128326 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91128024-d938-4a6e-9c1d-b701f716a1e2" path="/var/lib/kubelet/pods/91128024-d938-4a6e-9c1d-b701f716a1e2/volumes" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.214384 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 23 12:15:52 crc kubenswrapper[4865]: W0123 12:15:52.226942 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8060e11e_ba4c_4175_9866_e9d61f246492.slice/crio-ce24bdc0a8be69db2c4df1cb33108a9bfebada7d5f92dc12d8f1a7b293983809 WatchSource:0}: Error finding container ce24bdc0a8be69db2c4df1cb33108a9bfebada7d5f92dc12d8f1a7b293983809: Status 404 returned error can't find the container with id ce24bdc0a8be69db2c4df1cb33108a9bfebada7d5f92dc12d8f1a7b293983809 Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.330333 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8060e11e-ba4c-4175-9866-e9d61f246492","Type":"ContainerStarted","Data":"ce24bdc0a8be69db2c4df1cb33108a9bfebada7d5f92dc12d8f1a7b293983809"} Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.330587 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.340317 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.558635 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fbff7fb87-g98dn"] Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.561242 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.662998 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s6lh\" (UniqueName: \"kubernetes.io/projected/b0f4e90e-919f-4354-b584-a1516961888c-kube-api-access-9s6lh\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.663452 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-config\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.663548 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.663690 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-dns-svc\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.663766 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.663853 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.666803 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbff7fb87-g98dn"] Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.765929 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s6lh\" (UniqueName: \"kubernetes.io/projected/b0f4e90e-919f-4354-b584-a1516961888c-kube-api-access-9s6lh\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.766276 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-config\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.766314 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.766370 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-dns-svc\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.766389 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.766425 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.767925 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-dns-swift-storage-0\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.767985 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-dns-svc\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.768461 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-config\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.768757 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.769740 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.817378 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s6lh\" (UniqueName: 
\"kubernetes.io/projected/b0f4e90e-919f-4354-b584-a1516961888c-kube-api-access-9s6lh\") pod \"dnsmasq-dns-5fbff7fb87-g98dn\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:52 crc kubenswrapper[4865]: I0123 12:15:52.900014 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:53 crc kubenswrapper[4865]: I0123 12:15:53.342014 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8060e11e-ba4c-4175-9866-e9d61f246492","Type":"ContainerStarted","Data":"c8e06d7713e3194211cee01644a45f864cc1a4ddb358ad0544dc7ae9e317be3e"} Jan 23 12:15:53 crc kubenswrapper[4865]: I0123 12:15:53.365270 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.365248968 podStartE2EDuration="2.365248968s" podCreationTimestamp="2026-01-23 12:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:15:53.358452594 +0000 UTC m=+1397.527524820" watchObservedRunningTime="2026-01-23 12:15:53.365248968 +0000 UTC m=+1397.534321194" Jan 23 12:15:53 crc kubenswrapper[4865]: I0123 12:15:53.384788 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbff7fb87-g98dn"] Jan 23 12:15:54 crc kubenswrapper[4865]: I0123 12:15:54.353211 4865 generic.go:334] "Generic (PLEG): container finished" podID="b0f4e90e-919f-4354-b584-a1516961888c" containerID="fdb6733c6cd5d6cf86c30e84748f0b395a0251ac5e3f58cc75142286fd5b5222" exitCode=0 Jan 23 12:15:54 crc kubenswrapper[4865]: I0123 12:15:54.353494 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" event={"ID":"b0f4e90e-919f-4354-b584-a1516961888c","Type":"ContainerDied","Data":"fdb6733c6cd5d6cf86c30e84748f0b395a0251ac5e3f58cc75142286fd5b5222"} Jan 23 12:15:54 crc kubenswrapper[4865]: I0123 12:15:54.353557 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" event={"ID":"b0f4e90e-919f-4354-b584-a1516961888c","Type":"ContainerStarted","Data":"db276088ff7a99d7275f946f870f334029878766562a783d968682273f395889"} Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.100275 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.101080 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="ceilometer-central-agent" containerID="cri-o://f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174" gracePeriod=30 Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.102127 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="sg-core" containerID="cri-o://4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15" gracePeriod=30 Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.102249 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="proxy-httpd" containerID="cri-o://bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250" gracePeriod=30 Jan 23 12:15:55 crc kubenswrapper[4865]: 
I0123 12:15:55.102340 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="ceilometer-notification-agent" containerID="cri-o://920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510" gracePeriod=30 Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.217498 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.209:3000/\": read tcp 10.217.0.2:33980->10.217.0.209:3000: read: connection reset by peer" Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.217935 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.209:3000/\": dial tcp 10.217.0.209:3000: connect: connection refused" Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.389711 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.390988 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" event={"ID":"b0f4e90e-919f-4354-b584-a1516961888c","Type":"ContainerStarted","Data":"20ae29404706f7a2bb46907205e967857b09335143038b6c05b3a6561337384a"} Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.391371 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.396067 4865 generic.go:334] "Generic (PLEG): container finished" podID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerID="bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250" exitCode=0 Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.396096 4865 generic.go:334] "Generic (PLEG): container finished" podID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerID="4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15" exitCode=2 Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.396269 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" containerName="nova-api-log" containerID="cri-o://d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999" gracePeriod=30 Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.396491 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e2777c-1c3a-4972-981e-7560fd820f7b","Type":"ContainerDied","Data":"bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250"} Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.396518 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e2777c-1c3a-4972-981e-7560fd820f7b","Type":"ContainerDied","Data":"4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15"} Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.396593 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" containerName="nova-api-api" containerID="cri-o://fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e" gracePeriod=30 Jan 23 12:15:55 crc kubenswrapper[4865]: I0123 12:15:55.426752 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" podStartSLOduration=3.426733185 podStartE2EDuration="3.426733185s" podCreationTimestamp="2026-01-23 12:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:15:55.418283851 +0000 UTC m=+1399.587356087" watchObservedRunningTime="2026-01-23 12:15:55.426733185 +0000 UTC m=+1399.595805411" Jan 23 12:15:56 crc kubenswrapper[4865]: I0123 12:15:56.413203 4865 generic.go:334] "Generic (PLEG): container finished" podID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" containerID="d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999" exitCode=143 Jan 23 12:15:56 crc kubenswrapper[4865]: I0123 12:15:56.413315 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb131867-ea3a-406e-adf4-6c2116ff3a5e","Type":"ContainerDied","Data":"d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999"} Jan 23 12:15:56 crc kubenswrapper[4865]: I0123 12:15:56.422049 4865 generic.go:334] "Generic (PLEG): container finished" podID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerID="f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174" exitCode=0 Jan 23 12:15:56 crc kubenswrapper[4865]: I0123 12:15:56.422116 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e2777c-1c3a-4972-981e-7560fd820f7b","Type":"ContainerDied","Data":"f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174"} Jan 23 12:15:56 crc kubenswrapper[4865]: I0123 12:15:56.734468 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.037389 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.148402 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb131867-ea3a-406e-adf4-6c2116ff3a5e-config-data\") pod \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.148570 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb131867-ea3a-406e-adf4-6c2116ff3a5e-combined-ca-bundle\") pod \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.148629 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb131867-ea3a-406e-adf4-6c2116ff3a5e-logs\") pod \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.148681 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7rz2\" (UniqueName: \"kubernetes.io/projected/fb131867-ea3a-406e-adf4-6c2116ff3a5e-kube-api-access-k7rz2\") pod \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\" (UID: \"fb131867-ea3a-406e-adf4-6c2116ff3a5e\") " Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.149488 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb131867-ea3a-406e-adf4-6c2116ff3a5e-logs" (OuterVolumeSpecName: "logs") pod "fb131867-ea3a-406e-adf4-6c2116ff3a5e" (UID: "fb131867-ea3a-406e-adf4-6c2116ff3a5e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.180233 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb131867-ea3a-406e-adf4-6c2116ff3a5e-kube-api-access-k7rz2" (OuterVolumeSpecName: "kube-api-access-k7rz2") pod "fb131867-ea3a-406e-adf4-6c2116ff3a5e" (UID: "fb131867-ea3a-406e-adf4-6c2116ff3a5e"). InnerVolumeSpecName "kube-api-access-k7rz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.184151 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb131867-ea3a-406e-adf4-6c2116ff3a5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb131867-ea3a-406e-adf4-6c2116ff3a5e" (UID: "fb131867-ea3a-406e-adf4-6c2116ff3a5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.202780 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb131867-ea3a-406e-adf4-6c2116ff3a5e-config-data" (OuterVolumeSpecName: "config-data") pod "fb131867-ea3a-406e-adf4-6c2116ff3a5e" (UID: "fb131867-ea3a-406e-adf4-6c2116ff3a5e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.250813 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb131867-ea3a-406e-adf4-6c2116ff3a5e-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.251113 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7rz2\" (UniqueName: \"kubernetes.io/projected/fb131867-ea3a-406e-adf4-6c2116ff3a5e-kube-api-access-k7rz2\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.251124 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb131867-ea3a-406e-adf4-6c2116ff3a5e-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.251133 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb131867-ea3a-406e-adf4-6c2116ff3a5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.445825 4865 generic.go:334] "Generic (PLEG): container finished" podID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" containerID="fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e" exitCode=0 Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.445866 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb131867-ea3a-406e-adf4-6c2116ff3a5e","Type":"ContainerDied","Data":"fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e"} Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.445891 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fb131867-ea3a-406e-adf4-6c2116ff3a5e","Type":"ContainerDied","Data":"120a7b3eec3077aaf696e86b2fde1fa14d3768dda2e58ab6c9c6c504981034cb"} Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.445907 4865 scope.go:117] "RemoveContainer" containerID="fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.446025 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.472050 4865 scope.go:117] "RemoveContainer" containerID="d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.485486 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.498061 4865 scope.go:117] "RemoveContainer" containerID="fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e" Jan 23 12:15:59 crc kubenswrapper[4865]: E0123 12:15:59.498415 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e\": container with ID starting with fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e not found: ID does not exist" containerID="fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.498445 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e"} err="failed to get container status \"fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e\": rpc error: code = NotFound desc = could not find container \"fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e\": container with ID starting with fa9097c15fa05210bb1c5d1310d13aa5be5108ccbb4e4c6616ee4ebcbb1d8b9e not found: ID does not exist" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.498465 4865 scope.go:117] "RemoveContainer" containerID="d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999" Jan 23 12:15:59 crc kubenswrapper[4865]: E0123 12:15:59.498835 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999\": container with ID starting with d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999 not found: ID does not exist" containerID="d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.498862 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999"} err="failed to get container status \"d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999\": rpc error: code = NotFound desc = could not find container \"d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999\": container with ID starting with d4d7efdd66d0085f867366ef9518652ca3f891fc107e39daab7862ba1f3d1999 not found: ID does not exist" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.502483 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.516198 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:59 crc kubenswrapper[4865]: E0123 12:15:59.516672 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" containerName="nova-api-log" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.516694 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" containerName="nova-api-log" Jan 23 12:15:59 crc 
kubenswrapper[4865]: E0123 12:15:59.516714 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" containerName="nova-api-api" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.516722 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" containerName="nova-api-api" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.516910 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" containerName="nova-api-api" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.516929 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" containerName="nova-api-log" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.517950 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.521436 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.521825 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.521995 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.525747 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.658261 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef5590a-c39c-4217-a5ca-14e2deb926b3-logs\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.658323 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.658392 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.658690 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-config-data\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.658882 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hjkn\" (UniqueName: \"kubernetes.io/projected/cef5590a-c39c-4217-a5ca-14e2deb926b3-kube-api-access-2hjkn\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.658966 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-public-tls-certs\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.760965 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-public-tls-certs\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.761052 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef5590a-c39c-4217-a5ca-14e2deb926b3-logs\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.761087 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.761152 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.761234 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-config-data\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.761291 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hjkn\" (UniqueName: \"kubernetes.io/projected/cef5590a-c39c-4217-a5ca-14e2deb926b3-kube-api-access-2hjkn\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.761658 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef5590a-c39c-4217-a5ca-14e2deb926b3-logs\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.766290 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-public-tls-certs\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.766939 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.771590 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-config-data\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.780580 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.784674 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hjkn\" (UniqueName: \"kubernetes.io/projected/cef5590a-c39c-4217-a5ca-14e2deb926b3-kube-api-access-2hjkn\") pod \"nova-api-0\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " pod="openstack/nova-api-0" Jan 23 12:15:59 crc kubenswrapper[4865]: I0123 12:15:59.849342 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.133400 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb131867-ea3a-406e-adf4-6c2116ff3a5e" path="/var/lib/kubelet/pods/fb131867-ea3a-406e-adf4-6c2116ff3a5e/volumes" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.169647 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.272001 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nhlc\" (UniqueName: \"kubernetes.io/projected/88e2777c-1c3a-4972-981e-7560fd820f7b-kube-api-access-5nhlc\") pod \"88e2777c-1c3a-4972-981e-7560fd820f7b\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.272103 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-combined-ca-bundle\") pod \"88e2777c-1c3a-4972-981e-7560fd820f7b\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.272213 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e2777c-1c3a-4972-981e-7560fd820f7b-log-httpd\") pod \"88e2777c-1c3a-4972-981e-7560fd820f7b\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.272270 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-scripts\") pod \"88e2777c-1c3a-4972-981e-7560fd820f7b\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.272285 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-config-data\") pod \"88e2777c-1c3a-4972-981e-7560fd820f7b\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.272641 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-sg-core-conf-yaml\") pod 
\"88e2777c-1c3a-4972-981e-7560fd820f7b\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.272680 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e2777c-1c3a-4972-981e-7560fd820f7b-run-httpd\") pod \"88e2777c-1c3a-4972-981e-7560fd820f7b\" (UID: \"88e2777c-1c3a-4972-981e-7560fd820f7b\") " Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.280833 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-scripts" (OuterVolumeSpecName: "scripts") pod "88e2777c-1c3a-4972-981e-7560fd820f7b" (UID: "88e2777c-1c3a-4972-981e-7560fd820f7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.283469 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e2777c-1c3a-4972-981e-7560fd820f7b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "88e2777c-1c3a-4972-981e-7560fd820f7b" (UID: "88e2777c-1c3a-4972-981e-7560fd820f7b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.283631 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e2777c-1c3a-4972-981e-7560fd820f7b-kube-api-access-5nhlc" (OuterVolumeSpecName: "kube-api-access-5nhlc") pod "88e2777c-1c3a-4972-981e-7560fd820f7b" (UID: "88e2777c-1c3a-4972-981e-7560fd820f7b"). InnerVolumeSpecName "kube-api-access-5nhlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.283976 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e2777c-1c3a-4972-981e-7560fd820f7b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "88e2777c-1c3a-4972-981e-7560fd820f7b" (UID: "88e2777c-1c3a-4972-981e-7560fd820f7b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.332629 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "88e2777c-1c3a-4972-981e-7560fd820f7b" (UID: "88e2777c-1c3a-4972-981e-7560fd820f7b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.372306 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88e2777c-1c3a-4972-981e-7560fd820f7b" (UID: "88e2777c-1c3a-4972-981e-7560fd820f7b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.374785 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.374825 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.374837 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e2777c-1c3a-4972-981e-7560fd820f7b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.374847 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nhlc\" (UniqueName: \"kubernetes.io/projected/88e2777c-1c3a-4972-981e-7560fd820f7b-kube-api-access-5nhlc\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.374857 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.374867 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e2777c-1c3a-4972-981e-7560fd820f7b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.406703 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-config-data" (OuterVolumeSpecName: "config-data") pod "88e2777c-1c3a-4972-981e-7560fd820f7b" (UID: "88e2777c-1c3a-4972-981e-7560fd820f7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.459480 4865 generic.go:334] "Generic (PLEG): container finished" podID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerID="920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510" exitCode=0 Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.459895 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e2777c-1c3a-4972-981e-7560fd820f7b","Type":"ContainerDied","Data":"920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510"} Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.459987 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88e2777c-1c3a-4972-981e-7560fd820f7b","Type":"ContainerDied","Data":"2c4ca15e1667776ce2ace712ff0d644389a6317c7023cfd99bba6db7126d2ddd"} Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.460126 4865 scope.go:117] "RemoveContainer" containerID="bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.460405 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.478732 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e2777c-1c3a-4972-981e-7560fd820f7b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.495452 4865 scope.go:117] "RemoveContainer" containerID="4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.508116 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.518238 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.553057 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.553629 4865 scope.go:117] "RemoveContainer" containerID="920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510" Jan 23 12:16:00 crc kubenswrapper[4865]: E0123 12:16:00.555333 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="ceilometer-central-agent" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.555363 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="ceilometer-central-agent" Jan 23 12:16:00 crc kubenswrapper[4865]: E0123 12:16:00.555380 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="sg-core" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.555386 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="sg-core" Jan 23 12:16:00 crc kubenswrapper[4865]: E0123 12:16:00.555405 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="proxy-httpd" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.555411 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="proxy-httpd" Jan 23 12:16:00 crc kubenswrapper[4865]: E0123 12:16:00.555421 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="ceilometer-notification-agent" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.555427 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="ceilometer-notification-agent" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.555593 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="proxy-httpd" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.555620 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="ceilometer-central-agent" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.555627 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="sg-core" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.555647 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" containerName="ceilometer-notification-agent" Jan 23 
12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.562375 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.567937 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.568150 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.601272 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:16:00 crc kubenswrapper[4865]: W0123 12:16:00.609039 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcef5590a_c39c_4217_a5ca_14e2deb926b3.slice/crio-2d7518abe9399e7021d20842b3097e0f561017ccebc8ff59f852ea7447fe81de WatchSource:0}: Error finding container 2d7518abe9399e7021d20842b3097e0f561017ccebc8ff59f852ea7447fe81de: Status 404 returned error can't find the container with id 2d7518abe9399e7021d20842b3097e0f561017ccebc8ff59f852ea7447fe81de Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.622041 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.624463 4865 scope.go:117] "RemoveContainer" containerID="f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.660379 4865 scope.go:117] "RemoveContainer" containerID="bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250" Jan 23 12:16:00 crc kubenswrapper[4865]: E0123 12:16:00.662871 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250\": container with ID starting with bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250 not found: ID does not exist" containerID="bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.662929 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250"} err="failed to get container status \"bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250\": rpc error: code = NotFound desc = could not find container \"bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250\": container with ID starting with bece300c2d18d439ef88b2155955b3b9a0185aa66015169a5da5c639501f8250 not found: ID does not exist" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.662971 4865 scope.go:117] "RemoveContainer" containerID="4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15" Jan 23 12:16:00 crc kubenswrapper[4865]: E0123 12:16:00.663494 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15\": container with ID starting with 4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15 not found: ID does not exist" containerID="4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.663531 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15"} err="failed to get container status \"4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15\": rpc error: code = NotFound desc = could not find container \"4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15\": container with ID starting with 4bc036aa35cb0587d987e893171d8ae17bc0fdf06e4660f145736de9462d9e15 not found: ID does not exist" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.663562 4865 scope.go:117] "RemoveContainer" containerID="920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510" Jan 23 12:16:00 crc kubenswrapper[4865]: E0123 12:16:00.664301 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510\": container with ID starting with 920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510 not found: ID does not exist" containerID="920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.664359 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510"} err="failed to get container status \"920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510\": rpc error: code = NotFound desc = could not find container \"920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510\": container with ID starting with 920b69a98922340b2be2cb66e20dd032f29c4dca6d784422989fa82598483510 not found: ID does not exist" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.664396 4865 scope.go:117] "RemoveContainer" containerID="f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174" Jan 23 12:16:00 crc kubenswrapper[4865]: E0123 12:16:00.664757 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174\": container with ID starting with f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174 not found: ID does not exist" containerID="f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.664789 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174"} err="failed to get container status \"f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174\": rpc error: code = NotFound desc = could not find container \"f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174\": container with ID starting with f929ec14a1519c2cc49f84087835ba419baa48c985e7dc09ffa598ec45d9c174 not found: ID does not exist" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.684301 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-config-data\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.684366 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.684413 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-scripts\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.684451 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb56a942-823d-4d71-a224-cf157da9d100-log-httpd\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.684850 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb56a942-823d-4d71-a224-cf157da9d100-run-httpd\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.684979 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvwn8\" (UniqueName: \"kubernetes.io/projected/eb56a942-823d-4d71-a224-cf157da9d100-kube-api-access-cvwn8\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.685286 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.786844 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.786895 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-config-data\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.786934 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.786970 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-scripts\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.787001 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb56a942-823d-4d71-a224-cf157da9d100-log-httpd\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.787046 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb56a942-823d-4d71-a224-cf157da9d100-run-httpd\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.787071 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvwn8\" (UniqueName: \"kubernetes.io/projected/eb56a942-823d-4d71-a224-cf157da9d100-kube-api-access-cvwn8\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.788683 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb56a942-823d-4d71-a224-cf157da9d100-log-httpd\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.790939 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb56a942-823d-4d71-a224-cf157da9d100-run-httpd\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.792255 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.792672 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-scripts\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.793371 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.806854 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-config-data\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.809173 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvwn8\" (UniqueName: \"kubernetes.io/projected/eb56a942-823d-4d71-a224-cf157da9d100-kube-api-access-cvwn8\") pod \"ceilometer-0\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " pod="openstack/ceilometer-0" Jan 23 12:16:00 crc kubenswrapper[4865]: I0123 12:16:00.896453 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:16:01 crc kubenswrapper[4865]: I0123 12:16:01.391180 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:16:01 crc kubenswrapper[4865]: W0123 12:16:01.403118 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb56a942_823d_4d71_a224_cf157da9d100.slice/crio-273a8dd7def25d477014b20f33010c01ca8d8219e1c4629c821a051519f8fc87 WatchSource:0}: Error finding container 273a8dd7def25d477014b20f33010c01ca8d8219e1c4629c821a051519f8fc87: Status 404 returned error can't find the container with id 273a8dd7def25d477014b20f33010c01ca8d8219e1c4629c821a051519f8fc87 Jan 23 12:16:01 crc kubenswrapper[4865]: I0123 12:16:01.507340 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb56a942-823d-4d71-a224-cf157da9d100","Type":"ContainerStarted","Data":"273a8dd7def25d477014b20f33010c01ca8d8219e1c4629c821a051519f8fc87"} Jan 23 12:16:01 crc kubenswrapper[4865]: I0123 12:16:01.509836 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cef5590a-c39c-4217-a5ca-14e2deb926b3","Type":"ContainerStarted","Data":"f6e5649bdde44cc0592947a667131512767e4904a68845d43019f11b670480dc"} Jan 23 12:16:01 crc kubenswrapper[4865]: I0123 12:16:01.509867 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cef5590a-c39c-4217-a5ca-14e2deb926b3","Type":"ContainerStarted","Data":"0e909e43950b6fecddb9331effaed73e78217dffae7430136b36f7f69c14d54f"} Jan 23 12:16:01 crc kubenswrapper[4865]: I0123 12:16:01.509879 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cef5590a-c39c-4217-a5ca-14e2deb926b3","Type":"ContainerStarted","Data":"2d7518abe9399e7021d20842b3097e0f561017ccebc8ff59f852ea7447fe81de"} Jan 23 12:16:01 crc kubenswrapper[4865]: I0123 12:16:01.530931 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.530914492 podStartE2EDuration="2.530914492s" podCreationTimestamp="2026-01-23 12:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:16:01.529741564 +0000 UTC m=+1405.698813790" watchObservedRunningTime="2026-01-23 12:16:01.530914492 +0000 UTC m=+1405.699986708" Jan 23 12:16:01 crc kubenswrapper[4865]: I0123 12:16:01.734536 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:16:01 crc kubenswrapper[4865]: I0123 12:16:01.756934 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.130823 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e2777c-1c3a-4972-981e-7560fd820f7b" path="/var/lib/kubelet/pods/88e2777c-1c3a-4972-981e-7560fd820f7b/volumes" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.523094 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb56a942-823d-4d71-a224-cf157da9d100","Type":"ContainerStarted","Data":"8492ea2fbac475cc90eb07dee3db0e8cb5199f2e11f57794bebb09cabd933c19"} Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.542914 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-cell1-novncproxy-0" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.722314 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-tmtvq"] Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.723630 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.726937 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.727129 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.748479 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tmtvq"] Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.825412 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tmtvq\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.825506 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-config-data\") pod \"nova-cell1-cell-mapping-tmtvq\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.825540 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-scripts\") pod \"nova-cell1-cell-mapping-tmtvq\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.825623 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlfkb\" (UniqueName: \"kubernetes.io/projected/845c3ecd-d398-4524-b2ef-ff90c88fb498-kube-api-access-qlfkb\") pod \"nova-cell1-cell-mapping-tmtvq\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.901736 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.936234 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlfkb\" (UniqueName: \"kubernetes.io/projected/845c3ecd-d398-4524-b2ef-ff90c88fb498-kube-api-access-qlfkb\") pod \"nova-cell1-cell-mapping-tmtvq\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.936537 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tmtvq\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: 
I0123 12:16:02.936660 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-config-data\") pod \"nova-cell1-cell-mapping-tmtvq\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.936698 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-scripts\") pod \"nova-cell1-cell-mapping-tmtvq\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.942954 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tmtvq\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.942980 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-config-data\") pod \"nova-cell1-cell-mapping-tmtvq\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.944034 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-scripts\") pod \"nova-cell1-cell-mapping-tmtvq\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.960191 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlfkb\" (UniqueName: \"kubernetes.io/projected/845c3ecd-d398-4524-b2ef-ff90c88fb498-kube-api-access-qlfkb\") pod \"nova-cell1-cell-mapping-tmtvq\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.985564 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bbfd6cbff-hnkp4"] Jan 23 12:16:02 crc kubenswrapper[4865]: I0123 12:16:02.985800 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" podUID="c29c1861-9f5c-484e-87b9-13f34ea426d5" containerName="dnsmasq-dns" containerID="cri-o://ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862" gracePeriod=10 Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.046924 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.495965 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.546215 4865 generic.go:334] "Generic (PLEG): container finished" podID="c29c1861-9f5c-484e-87b9-13f34ea426d5" containerID="ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862" exitCode=0 Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.546276 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" event={"ID":"c29c1861-9f5c-484e-87b9-13f34ea426d5","Type":"ContainerDied","Data":"ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862"} Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.546302 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" event={"ID":"c29c1861-9f5c-484e-87b9-13f34ea426d5","Type":"ContainerDied","Data":"ab5644c745ec96a554cb2d241a72759ff1dec8b87115cda9f8b03b4af51f998f"} Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.546319 4865 scope.go:117] "RemoveContainer" containerID="ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.546431 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbfd6cbff-hnkp4" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.551206 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb56a942-823d-4d71-a224-cf157da9d100","Type":"ContainerStarted","Data":"5b372db41521c1d71334cef8fd8bbd524b55f496ffbd8f1a7288be8473a57011"} Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.574800 4865 scope.go:117] "RemoveContainer" containerID="0c5c10942b9d312870836a739aad096a71cdb18c034b208d4ded5e8db5204903" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.613944 4865 scope.go:117] "RemoveContainer" containerID="ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862" Jan 23 12:16:03 crc kubenswrapper[4865]: E0123 12:16:03.615966 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862\": container with ID starting with ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862 not found: ID does not exist" containerID="ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.616024 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862"} err="failed to get container status \"ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862\": rpc error: code = NotFound desc = could not find container \"ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862\": container with ID starting with ed88e97efdeac125d2090857b7a09c5cafab7e5e780ee02641afaf4a38547862 not found: ID does not exist" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.616049 4865 scope.go:117] "RemoveContainer" containerID="0c5c10942b9d312870836a739aad096a71cdb18c034b208d4ded5e8db5204903" Jan 23 12:16:03 crc kubenswrapper[4865]: E0123 12:16:03.616391 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c5c10942b9d312870836a739aad096a71cdb18c034b208d4ded5e8db5204903\": container with ID starting with 
0c5c10942b9d312870836a739aad096a71cdb18c034b208d4ded5e8db5204903 not found: ID does not exist" containerID="0c5c10942b9d312870836a739aad096a71cdb18c034b208d4ded5e8db5204903" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.616426 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c5c10942b9d312870836a739aad096a71cdb18c034b208d4ded5e8db5204903"} err="failed to get container status \"0c5c10942b9d312870836a739aad096a71cdb18c034b208d4ded5e8db5204903\": rpc error: code = NotFound desc = could not find container \"0c5c10942b9d312870836a739aad096a71cdb18c034b208d4ded5e8db5204903\": container with ID starting with 0c5c10942b9d312870836a739aad096a71cdb18c034b208d4ded5e8db5204903 not found: ID does not exist" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.641427 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tmtvq"] Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.652858 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-dns-svc\") pod \"c29c1861-9f5c-484e-87b9-13f34ea426d5\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.652915 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggkxs\" (UniqueName: \"kubernetes.io/projected/c29c1861-9f5c-484e-87b9-13f34ea426d5-kube-api-access-ggkxs\") pod \"c29c1861-9f5c-484e-87b9-13f34ea426d5\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.652952 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-ovsdbserver-sb\") pod \"c29c1861-9f5c-484e-87b9-13f34ea426d5\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.652991 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-config\") pod \"c29c1861-9f5c-484e-87b9-13f34ea426d5\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.653051 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-ovsdbserver-nb\") pod \"c29c1861-9f5c-484e-87b9-13f34ea426d5\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.653231 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-dns-swift-storage-0\") pod \"c29c1861-9f5c-484e-87b9-13f34ea426d5\" (UID: \"c29c1861-9f5c-484e-87b9-13f34ea426d5\") " Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.671527 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29c1861-9f5c-484e-87b9-13f34ea426d5-kube-api-access-ggkxs" (OuterVolumeSpecName: "kube-api-access-ggkxs") pod "c29c1861-9f5c-484e-87b9-13f34ea426d5" (UID: "c29c1861-9f5c-484e-87b9-13f34ea426d5"). InnerVolumeSpecName "kube-api-access-ggkxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.712655 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c29c1861-9f5c-484e-87b9-13f34ea426d5" (UID: "c29c1861-9f5c-484e-87b9-13f34ea426d5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.720466 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c29c1861-9f5c-484e-87b9-13f34ea426d5" (UID: "c29c1861-9f5c-484e-87b9-13f34ea426d5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.721383 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c29c1861-9f5c-484e-87b9-13f34ea426d5" (UID: "c29c1861-9f5c-484e-87b9-13f34ea426d5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.731197 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c29c1861-9f5c-484e-87b9-13f34ea426d5" (UID: "c29c1861-9f5c-484e-87b9-13f34ea426d5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.755950 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.755981 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.755993 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.756006 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.756020 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggkxs\" (UniqueName: \"kubernetes.io/projected/c29c1861-9f5c-484e-87b9-13f34ea426d5-kube-api-access-ggkxs\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.759323 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-config" (OuterVolumeSpecName: "config") pod "c29c1861-9f5c-484e-87b9-13f34ea426d5" (UID: "c29c1861-9f5c-484e-87b9-13f34ea426d5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.858146 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c29c1861-9f5c-484e-87b9-13f34ea426d5-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.889711 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bbfd6cbff-hnkp4"] Jan 23 12:16:03 crc kubenswrapper[4865]: I0123 12:16:03.927184 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bbfd6cbff-hnkp4"] Jan 23 12:16:04 crc kubenswrapper[4865]: I0123 12:16:04.129872 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29c1861-9f5c-484e-87b9-13f34ea426d5" path="/var/lib/kubelet/pods/c29c1861-9f5c-484e-87b9-13f34ea426d5/volumes" Jan 23 12:16:04 crc kubenswrapper[4865]: I0123 12:16:04.559305 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tmtvq" event={"ID":"845c3ecd-d398-4524-b2ef-ff90c88fb498","Type":"ContainerStarted","Data":"5d07687820a4c583231d0b28e336f409654bde9c6d7446b9955828005453d04e"} Jan 23 12:16:04 crc kubenswrapper[4865]: I0123 12:16:04.559360 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tmtvq" event={"ID":"845c3ecd-d398-4524-b2ef-ff90c88fb498","Type":"ContainerStarted","Data":"a11baa39b018775aa03b130b8fa3e1a3adbc49afc4fa0875b54947786f0fc66e"} Jan 23 12:16:04 crc kubenswrapper[4865]: I0123 12:16:04.562517 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb56a942-823d-4d71-a224-cf157da9d100","Type":"ContainerStarted","Data":"1170bc7a74c1938851ffbe7a8ec31fda85fcc09f6471d22fca7063a9167e7163"} Jan 23 12:16:04 crc kubenswrapper[4865]: I0123 12:16:04.576683 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-tmtvq" podStartSLOduration=2.576593136 podStartE2EDuration="2.576593136s" podCreationTimestamp="2026-01-23 12:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:16:04.573714927 +0000 UTC m=+1408.742787153" watchObservedRunningTime="2026-01-23 12:16:04.576593136 +0000 UTC m=+1408.745665362" Jan 23 12:16:06 crc kubenswrapper[4865]: I0123 12:16:06.586062 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb56a942-823d-4d71-a224-cf157da9d100","Type":"ContainerStarted","Data":"14c4874b153f143e6ac61f8a7e7840aca7d6e6d8bd8521f39a0d633f5456ac3f"} Jan 23 12:16:06 crc kubenswrapper[4865]: I0123 12:16:06.586545 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 12:16:09 crc kubenswrapper[4865]: I0123 12:16:09.614299 4865 generic.go:334] "Generic (PLEG): container finished" podID="845c3ecd-d398-4524-b2ef-ff90c88fb498" containerID="5d07687820a4c583231d0b28e336f409654bde9c6d7446b9955828005453d04e" exitCode=0 Jan 23 12:16:09 crc kubenswrapper[4865]: I0123 12:16:09.614385 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tmtvq" event={"ID":"845c3ecd-d398-4524-b2ef-ff90c88fb498","Type":"ContainerDied","Data":"5d07687820a4c583231d0b28e336f409654bde9c6d7446b9955828005453d04e"} Jan 23 12:16:09 crc kubenswrapper[4865]: I0123 12:16:09.637910 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=4.950917936 podStartE2EDuration="9.637887952s" podCreationTimestamp="2026-01-23 12:16:00 +0000 UTC" firstStartedPulling="2026-01-23 12:16:01.40674579 +0000 UTC m=+1405.575818016" lastFinishedPulling="2026-01-23 12:16:06.093715806 +0000 UTC m=+1410.262788032" observedRunningTime="2026-01-23 12:16:06.620352046 +0000 UTC m=+1410.789424272" watchObservedRunningTime="2026-01-23 12:16:09.637887952 +0000 UTC m=+1413.806960208" Jan 23 12:16:09 crc kubenswrapper[4865]: I0123 12:16:09.849828 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 12:16:09 crc kubenswrapper[4865]: I0123 12:16:09.849906 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 12:16:10 crc kubenswrapper[4865]: I0123 12:16:10.864315 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cef5590a-c39c-4217-a5ca-14e2deb926b3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.216:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:16:10 crc kubenswrapper[4865]: I0123 12:16:10.864362 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cef5590a-c39c-4217-a5ca-14e2deb926b3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.216:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.067497 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.240125 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-config-data\") pod \"845c3ecd-d398-4524-b2ef-ff90c88fb498\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.240254 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-scripts\") pod \"845c3ecd-d398-4524-b2ef-ff90c88fb498\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.240377 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-combined-ca-bundle\") pod \"845c3ecd-d398-4524-b2ef-ff90c88fb498\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.240577 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlfkb\" (UniqueName: \"kubernetes.io/projected/845c3ecd-d398-4524-b2ef-ff90c88fb498-kube-api-access-qlfkb\") pod \"845c3ecd-d398-4524-b2ef-ff90c88fb498\" (UID: \"845c3ecd-d398-4524-b2ef-ff90c88fb498\") " Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.248856 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/845c3ecd-d398-4524-b2ef-ff90c88fb498-kube-api-access-qlfkb" (OuterVolumeSpecName: "kube-api-access-qlfkb") pod "845c3ecd-d398-4524-b2ef-ff90c88fb498" (UID: "845c3ecd-d398-4524-b2ef-ff90c88fb498"). InnerVolumeSpecName "kube-api-access-qlfkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.260774 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-scripts" (OuterVolumeSpecName: "scripts") pod "845c3ecd-d398-4524-b2ef-ff90c88fb498" (UID: "845c3ecd-d398-4524-b2ef-ff90c88fb498"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.276729 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-config-data" (OuterVolumeSpecName: "config-data") pod "845c3ecd-d398-4524-b2ef-ff90c88fb498" (UID: "845c3ecd-d398-4524-b2ef-ff90c88fb498"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.276747 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "845c3ecd-d398-4524-b2ef-ff90c88fb498" (UID: "845c3ecd-d398-4524-b2ef-ff90c88fb498"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.343562 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlfkb\" (UniqueName: \"kubernetes.io/projected/845c3ecd-d398-4524-b2ef-ff90c88fb498-kube-api-access-qlfkb\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.343660 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.343674 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.343686 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/845c3ecd-d398-4524-b2ef-ff90c88fb498-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.636787 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tmtvq" event={"ID":"845c3ecd-d398-4524-b2ef-ff90c88fb498","Type":"ContainerDied","Data":"a11baa39b018775aa03b130b8fa3e1a3adbc49afc4fa0875b54947786f0fc66e"} Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.636840 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a11baa39b018775aa03b130b8fa3e1a3adbc49afc4fa0875b54947786f0fc66e" Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.636912 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tmtvq" Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.834268 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.834538 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cef5590a-c39c-4217-a5ca-14e2deb926b3" containerName="nova-api-log" containerID="cri-o://0e909e43950b6fecddb9331effaed73e78217dffae7430136b36f7f69c14d54f" gracePeriod=30 Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.834648 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cef5590a-c39c-4217-a5ca-14e2deb926b3" containerName="nova-api-api" containerID="cri-o://f6e5649bdde44cc0592947a667131512767e4904a68845d43019f11b670480dc" gracePeriod=30 Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.870285 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.870536 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b550b10b-74fe-4f27-92c8-011dd04e87e0" containerName="nova-scheduler-scheduler" containerID="cri-o://c26f067d3e2f7b3224813d43107ffc59c9cb10c26f18831dfb689b39ad0809f9" gracePeriod=30 Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.884789 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.885079 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerName="nova-metadata-log" containerID="cri-o://6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b" gracePeriod=30 Jan 23 12:16:11 crc kubenswrapper[4865]: I0123 12:16:11.885253 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerName="nova-metadata-metadata" containerID="cri-o://d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7" gracePeriod=30 Jan 23 12:16:12 crc kubenswrapper[4865]: I0123 12:16:12.648682 4865 generic.go:334] "Generic (PLEG): container finished" podID="cef5590a-c39c-4217-a5ca-14e2deb926b3" containerID="0e909e43950b6fecddb9331effaed73e78217dffae7430136b36f7f69c14d54f" exitCode=143 Jan 23 12:16:12 crc kubenswrapper[4865]: I0123 12:16:12.648732 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cef5590a-c39c-4217-a5ca-14e2deb926b3","Type":"ContainerDied","Data":"0e909e43950b6fecddb9331effaed73e78217dffae7430136b36f7f69c14d54f"} Jan 23 12:16:12 crc kubenswrapper[4865]: I0123 12:16:12.656686 4865 generic.go:334] "Generic (PLEG): container finished" podID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerID="6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b" exitCode=143 Jan 23 12:16:12 crc kubenswrapper[4865]: I0123 12:16:12.656737 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4b1b266-e09f-40b7-800d-d95db2ad4632","Type":"ContainerDied","Data":"6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b"} Jan 23 12:16:14 crc kubenswrapper[4865]: I0123 12:16:14.678056 4865 generic.go:334] "Generic (PLEG): container finished" podID="b550b10b-74fe-4f27-92c8-011dd04e87e0" 
containerID="c26f067d3e2f7b3224813d43107ffc59c9cb10c26f18831dfb689b39ad0809f9" exitCode=0 Jan 23 12:16:14 crc kubenswrapper[4865]: I0123 12:16:14.678252 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b550b10b-74fe-4f27-92c8-011dd04e87e0","Type":"ContainerDied","Data":"c26f067d3e2f7b3224813d43107ffc59c9cb10c26f18831dfb689b39ad0809f9"} Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.030582 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.151651 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b550b10b-74fe-4f27-92c8-011dd04e87e0-combined-ca-bundle\") pod \"b550b10b-74fe-4f27-92c8-011dd04e87e0\" (UID: \"b550b10b-74fe-4f27-92c8-011dd04e87e0\") " Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.151874 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b550b10b-74fe-4f27-92c8-011dd04e87e0-config-data\") pod \"b550b10b-74fe-4f27-92c8-011dd04e87e0\" (UID: \"b550b10b-74fe-4f27-92c8-011dd04e87e0\") " Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.164851 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsl2t\" (UniqueName: \"kubernetes.io/projected/b550b10b-74fe-4f27-92c8-011dd04e87e0-kube-api-access-jsl2t\") pod \"b550b10b-74fe-4f27-92c8-011dd04e87e0\" (UID: \"b550b10b-74fe-4f27-92c8-011dd04e87e0\") " Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.176707 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b550b10b-74fe-4f27-92c8-011dd04e87e0-kube-api-access-jsl2t" (OuterVolumeSpecName: "kube-api-access-jsl2t") pod "b550b10b-74fe-4f27-92c8-011dd04e87e0" (UID: "b550b10b-74fe-4f27-92c8-011dd04e87e0"). InnerVolumeSpecName "kube-api-access-jsl2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.185416 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b550b10b-74fe-4f27-92c8-011dd04e87e0-config-data" (OuterVolumeSpecName: "config-data") pod "b550b10b-74fe-4f27-92c8-011dd04e87e0" (UID: "b550b10b-74fe-4f27-92c8-011dd04e87e0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.191453 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": dial tcp 10.217.0.210:8775: connect: connection refused" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.192835 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": dial tcp 10.217.0.210:8775: connect: connection refused" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.196814 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b550b10b-74fe-4f27-92c8-011dd04e87e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b550b10b-74fe-4f27-92c8-011dd04e87e0" (UID: "b550b10b-74fe-4f27-92c8-011dd04e87e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.268919 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b550b10b-74fe-4f27-92c8-011dd04e87e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.268956 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsl2t\" (UniqueName: \"kubernetes.io/projected/b550b10b-74fe-4f27-92c8-011dd04e87e0-kube-api-access-jsl2t\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.268968 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b550b10b-74fe-4f27-92c8-011dd04e87e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.445578 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.574677 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-combined-ca-bundle\") pod \"c4b1b266-e09f-40b7-800d-d95db2ad4632\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.574738 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xj8s\" (UniqueName: \"kubernetes.io/projected/c4b1b266-e09f-40b7-800d-d95db2ad4632-kube-api-access-4xj8s\") pod \"c4b1b266-e09f-40b7-800d-d95db2ad4632\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.574796 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4b1b266-e09f-40b7-800d-d95db2ad4632-logs\") pod \"c4b1b266-e09f-40b7-800d-d95db2ad4632\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.574869 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-nova-metadata-tls-certs\") pod \"c4b1b266-e09f-40b7-800d-d95db2ad4632\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.574972 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-config-data\") pod \"c4b1b266-e09f-40b7-800d-d95db2ad4632\" (UID: \"c4b1b266-e09f-40b7-800d-d95db2ad4632\") " Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.575241 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4b1b266-e09f-40b7-800d-d95db2ad4632-logs" (OuterVolumeSpecName: "logs") pod "c4b1b266-e09f-40b7-800d-d95db2ad4632" (UID: "c4b1b266-e09f-40b7-800d-d95db2ad4632"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.575488 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4b1b266-e09f-40b7-800d-d95db2ad4632-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.592071 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b1b266-e09f-40b7-800d-d95db2ad4632-kube-api-access-4xj8s" (OuterVolumeSpecName: "kube-api-access-4xj8s") pod "c4b1b266-e09f-40b7-800d-d95db2ad4632" (UID: "c4b1b266-e09f-40b7-800d-d95db2ad4632"). InnerVolumeSpecName "kube-api-access-4xj8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.607895 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-config-data" (OuterVolumeSpecName: "config-data") pod "c4b1b266-e09f-40b7-800d-d95db2ad4632" (UID: "c4b1b266-e09f-40b7-800d-d95db2ad4632"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.631975 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4b1b266-e09f-40b7-800d-d95db2ad4632" (UID: "c4b1b266-e09f-40b7-800d-d95db2ad4632"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.636924 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c4b1b266-e09f-40b7-800d-d95db2ad4632" (UID: "c4b1b266-e09f-40b7-800d-d95db2ad4632"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.676846 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.676874 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xj8s\" (UniqueName: \"kubernetes.io/projected/c4b1b266-e09f-40b7-800d-d95db2ad4632-kube-api-access-4xj8s\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.676885 4865 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.676895 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4b1b266-e09f-40b7-800d-d95db2ad4632-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.691021 4865 generic.go:334] "Generic (PLEG): container finished" podID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerID="d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7" exitCode=0 Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.691093 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4b1b266-e09f-40b7-800d-d95db2ad4632","Type":"ContainerDied","Data":"d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7"} Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.691123 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c4b1b266-e09f-40b7-800d-d95db2ad4632","Type":"ContainerDied","Data":"8ce00c02134c370a45d22e3dd86da5bf1ac2e089ab8c15f166ca368140574bd7"} Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.691142 4865 scope.go:117] "RemoveContainer" containerID="d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.691311 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.701104 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b550b10b-74fe-4f27-92c8-011dd04e87e0","Type":"ContainerDied","Data":"befd31821ce44953ebf5a5d6b02e7a49c1e793a51fcb5aa140144fba40d917f5"} Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.701184 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.769875 4865 scope.go:117] "RemoveContainer" containerID="6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.786421 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.805796 4865 scope.go:117] "RemoveContainer" containerID="d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7" Jan 23 12:16:15 crc kubenswrapper[4865]: E0123 12:16:15.806968 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7\": container with ID starting with d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7 not found: ID does not exist" containerID="d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.807007 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7"} err="failed to get container status \"d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7\": rpc error: code = NotFound desc = could not find container \"d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7\": container with ID starting with d89761bc5bb6d8693f347758d86adcd3dddd3168ff0da93648e2002011c9fdd7 not found: ID does not exist" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.807048 4865 scope.go:117] "RemoveContainer" containerID="6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b" Jan 23 12:16:15 crc kubenswrapper[4865]: E0123 12:16:15.807299 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b\": container with ID starting with 6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b not found: ID does not exist" containerID="6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.807320 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b"} err="failed to get container status \"6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b\": rpc error: code = NotFound desc = could not find container \"6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b\": container with ID starting with 6687890119cdca5693b045ba10a078b00ea385ed277c6c93e5ee9e54b5c1e88b not found: ID does not exist" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.807334 4865 scope.go:117] "RemoveContainer" containerID="c26f067d3e2f7b3224813d43107ffc59c9cb10c26f18831dfb689b39ad0809f9" Jan 23 12:16:15 crc 
kubenswrapper[4865]: I0123 12:16:15.810159 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.829280 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.849675 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.858933 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:16:15 crc kubenswrapper[4865]: E0123 12:16:15.859333 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29c1861-9f5c-484e-87b9-13f34ea426d5" containerName="init" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.859352 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29c1861-9f5c-484e-87b9-13f34ea426d5" containerName="init" Jan 23 12:16:15 crc kubenswrapper[4865]: E0123 12:16:15.859364 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845c3ecd-d398-4524-b2ef-ff90c88fb498" containerName="nova-manage" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.859373 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="845c3ecd-d398-4524-b2ef-ff90c88fb498" containerName="nova-manage" Jan 23 12:16:15 crc kubenswrapper[4865]: E0123 12:16:15.859392 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerName="nova-metadata-log" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.859399 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerName="nova-metadata-log" Jan 23 12:16:15 crc kubenswrapper[4865]: E0123 12:16:15.859433 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c29c1861-9f5c-484e-87b9-13f34ea426d5" containerName="dnsmasq-dns" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.859441 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c29c1861-9f5c-484e-87b9-13f34ea426d5" containerName="dnsmasq-dns" Jan 23 12:16:15 crc kubenswrapper[4865]: E0123 12:16:15.859456 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerName="nova-metadata-metadata" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.859464 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerName="nova-metadata-metadata" Jan 23 12:16:15 crc kubenswrapper[4865]: E0123 12:16:15.859482 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b550b10b-74fe-4f27-92c8-011dd04e87e0" containerName="nova-scheduler-scheduler" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.859490 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b550b10b-74fe-4f27-92c8-011dd04e87e0" containerName="nova-scheduler-scheduler" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.859793 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerName="nova-metadata-log" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.859813 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" containerName="nova-metadata-metadata" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.859830 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c29c1861-9f5c-484e-87b9-13f34ea426d5" 
containerName="dnsmasq-dns" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.859851 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="845c3ecd-d398-4524-b2ef-ff90c88fb498" containerName="nova-manage" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.859871 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b550b10b-74fe-4f27-92c8-011dd04e87e0" containerName="nova-scheduler-scheduler" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.860870 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.863087 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.863262 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.883514 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.885064 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.888314 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.903343 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.912512 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.981547 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88brw\" (UniqueName: \"kubernetes.io/projected/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-kube-api-access-88brw\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.982433 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-config-data\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.982582 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-logs\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.982683 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:15 crc kubenswrapper[4865]: I0123 12:16:15.982728 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.084120 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-config-data\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.084189 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb5b8f9-2c20-498c-87c9-f53548e98378-config-data\") pod \"nova-scheduler-0\" (UID: \"cfb5b8f9-2c20-498c-87c9-f53548e98378\") " pod="openstack/nova-scheduler-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.084219 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb5b8f9-2c20-498c-87c9-f53548e98378-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfb5b8f9-2c20-498c-87c9-f53548e98378\") " pod="openstack/nova-scheduler-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.084251 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-logs\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.084281 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tp9j\" (UniqueName: \"kubernetes.io/projected/cfb5b8f9-2c20-498c-87c9-f53548e98378-kube-api-access-6tp9j\") pod \"nova-scheduler-0\" (UID: \"cfb5b8f9-2c20-498c-87c9-f53548e98378\") " pod="openstack/nova-scheduler-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.084325 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.084357 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.084448 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88brw\" (UniqueName: \"kubernetes.io/projected/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-kube-api-access-88brw\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.085816 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-logs\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:16 crc kubenswrapper[4865]: 
I0123 12:16:16.088994 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.090291 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-config-data\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.092638 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.106427 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88brw\" (UniqueName: \"kubernetes.io/projected/f83b3f2b-567e-4afe-9797-db1aa2bdadaa-kube-api-access-88brw\") pod \"nova-metadata-0\" (UID: \"f83b3f2b-567e-4afe-9797-db1aa2bdadaa\") " pod="openstack/nova-metadata-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.133297 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b550b10b-74fe-4f27-92c8-011dd04e87e0" path="/var/lib/kubelet/pods/b550b10b-74fe-4f27-92c8-011dd04e87e0/volumes" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.134118 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b1b266-e09f-40b7-800d-d95db2ad4632" path="/var/lib/kubelet/pods/c4b1b266-e09f-40b7-800d-d95db2ad4632/volumes" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.177084 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.186452 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb5b8f9-2c20-498c-87c9-f53548e98378-config-data\") pod \"nova-scheduler-0\" (UID: \"cfb5b8f9-2c20-498c-87c9-f53548e98378\") " pod="openstack/nova-scheduler-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.186527 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb5b8f9-2c20-498c-87c9-f53548e98378-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfb5b8f9-2c20-498c-87c9-f53548e98378\") " pod="openstack/nova-scheduler-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.186576 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tp9j\" (UniqueName: \"kubernetes.io/projected/cfb5b8f9-2c20-498c-87c9-f53548e98378-kube-api-access-6tp9j\") pod \"nova-scheduler-0\" (UID: \"cfb5b8f9-2c20-498c-87c9-f53548e98378\") " pod="openstack/nova-scheduler-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.192035 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb5b8f9-2c20-498c-87c9-f53548e98378-config-data\") pod \"nova-scheduler-0\" (UID: \"cfb5b8f9-2c20-498c-87c9-f53548e98378\") " pod="openstack/nova-scheduler-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.195458 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb5b8f9-2c20-498c-87c9-f53548e98378-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cfb5b8f9-2c20-498c-87c9-f53548e98378\") " pod="openstack/nova-scheduler-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.202517 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tp9j\" (UniqueName: \"kubernetes.io/projected/cfb5b8f9-2c20-498c-87c9-f53548e98378-kube-api-access-6tp9j\") pod \"nova-scheduler-0\" (UID: \"cfb5b8f9-2c20-498c-87c9-f53548e98378\") " pod="openstack/nova-scheduler-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.205855 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.709581 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.735413 4865 generic.go:334] "Generic (PLEG): container finished" podID="cef5590a-c39c-4217-a5ca-14e2deb926b3" containerID="f6e5649bdde44cc0592947a667131512767e4904a68845d43019f11b670480dc" exitCode=0 Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.735516 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cef5590a-c39c-4217-a5ca-14e2deb926b3","Type":"ContainerDied","Data":"f6e5649bdde44cc0592947a667131512767e4904a68845d43019f11b670480dc"} Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.737372 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f83b3f2b-567e-4afe-9797-db1aa2bdadaa","Type":"ContainerStarted","Data":"fba87f9f4ab8f5dce6219a5ad81abdb35f203151e0aa3cc373995df6dc4b9ad3"} Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.770562 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 23 12:16:16 crc kubenswrapper[4865]: W0123 12:16:16.786350 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfb5b8f9_2c20_498c_87c9_f53548e98378.slice/crio-fe5e4ea96248545827e2059121636cd389ed7f28797356701c5ea1e98950ddad WatchSource:0}: Error finding container fe5e4ea96248545827e2059121636cd389ed7f28797356701c5ea1e98950ddad: Status 404 returned error can't find the container with id fe5e4ea96248545827e2059121636cd389ed7f28797356701c5ea1e98950ddad Jan 23 12:16:16 crc kubenswrapper[4865]: I0123 12:16:16.925731 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.012069 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-config-data\") pod \"cef5590a-c39c-4217-a5ca-14e2deb926b3\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.012362 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef5590a-c39c-4217-a5ca-14e2deb926b3-logs\") pod \"cef5590a-c39c-4217-a5ca-14e2deb926b3\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.012394 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-internal-tls-certs\") pod \"cef5590a-c39c-4217-a5ca-14e2deb926b3\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.012421 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-public-tls-certs\") pod \"cef5590a-c39c-4217-a5ca-14e2deb926b3\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.012458 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hjkn\" (UniqueName: \"kubernetes.io/projected/cef5590a-c39c-4217-a5ca-14e2deb926b3-kube-api-access-2hjkn\") pod \"cef5590a-c39c-4217-a5ca-14e2deb926b3\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.012489 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-combined-ca-bundle\") pod \"cef5590a-c39c-4217-a5ca-14e2deb926b3\" (UID: \"cef5590a-c39c-4217-a5ca-14e2deb926b3\") " Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.013160 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef5590a-c39c-4217-a5ca-14e2deb926b3-logs" (OuterVolumeSpecName: "logs") pod "cef5590a-c39c-4217-a5ca-14e2deb926b3" (UID: "cef5590a-c39c-4217-a5ca-14e2deb926b3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.019325 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef5590a-c39c-4217-a5ca-14e2deb926b3-kube-api-access-2hjkn" (OuterVolumeSpecName: "kube-api-access-2hjkn") pod "cef5590a-c39c-4217-a5ca-14e2deb926b3" (UID: "cef5590a-c39c-4217-a5ca-14e2deb926b3"). InnerVolumeSpecName "kube-api-access-2hjkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.060363 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cef5590a-c39c-4217-a5ca-14e2deb926b3" (UID: "cef5590a-c39c-4217-a5ca-14e2deb926b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.064484 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-config-data" (OuterVolumeSpecName: "config-data") pod "cef5590a-c39c-4217-a5ca-14e2deb926b3" (UID: "cef5590a-c39c-4217-a5ca-14e2deb926b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.092753 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cef5590a-c39c-4217-a5ca-14e2deb926b3" (UID: "cef5590a-c39c-4217-a5ca-14e2deb926b3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.102135 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cef5590a-c39c-4217-a5ca-14e2deb926b3" (UID: "cef5590a-c39c-4217-a5ca-14e2deb926b3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.114899 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hjkn\" (UniqueName: \"kubernetes.io/projected/cef5590a-c39c-4217-a5ca-14e2deb926b3-kube-api-access-2hjkn\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.114945 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.114955 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.114967 4865 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cef5590a-c39c-4217-a5ca-14e2deb926b3-logs\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.114976 4865 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.114985 4865 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cef5590a-c39c-4217-a5ca-14e2deb926b3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.750188 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cef5590a-c39c-4217-a5ca-14e2deb926b3","Type":"ContainerDied","Data":"2d7518abe9399e7021d20842b3097e0f561017ccebc8ff59f852ea7447fe81de"} Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.750246 4865 scope.go:117] "RemoveContainer" containerID="f6e5649bdde44cc0592947a667131512767e4904a68845d43019f11b670480dc" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.750256 4865 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.755139 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfb5b8f9-2c20-498c-87c9-f53548e98378","Type":"ContainerStarted","Data":"645c51d065b2e040cbc4162a8d768847e8e43d4ba2733636fee4cbf985ed7691"} Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.755180 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cfb5b8f9-2c20-498c-87c9-f53548e98378","Type":"ContainerStarted","Data":"fe5e4ea96248545827e2059121636cd389ed7f28797356701c5ea1e98950ddad"} Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.763966 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f83b3f2b-567e-4afe-9797-db1aa2bdadaa","Type":"ContainerStarted","Data":"aec1160afb5574688221c4f24679b56a4ee2754d30dea7af88d99e5b1c8a2690"} Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.764038 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f83b3f2b-567e-4afe-9797-db1aa2bdadaa","Type":"ContainerStarted","Data":"105eeec064c3471c24144b5b06b87f97c929a023ebccec0305f4b5356407f4e7"} Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.779919 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7798997659999998 podStartE2EDuration="2.779899766s" podCreationTimestamp="2026-01-23 12:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:16:17.773241755 +0000 UTC m=+1421.942314021" watchObservedRunningTime="2026-01-23 12:16:17.779899766 +0000 UTC m=+1421.948971992" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.783976 4865 scope.go:117] "RemoveContainer" containerID="0e909e43950b6fecddb9331effaed73e78217dffae7430136b36f7f69c14d54f" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.837798 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.859225 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.871829 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 23 12:16:17 crc kubenswrapper[4865]: E0123 12:16:17.873049 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef5590a-c39c-4217-a5ca-14e2deb926b3" containerName="nova-api-log" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.873066 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef5590a-c39c-4217-a5ca-14e2deb926b3" containerName="nova-api-log" Jan 23 12:16:17 crc kubenswrapper[4865]: E0123 12:16:17.873095 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef5590a-c39c-4217-a5ca-14e2deb926b3" containerName="nova-api-api" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.873104 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef5590a-c39c-4217-a5ca-14e2deb926b3" containerName="nova-api-api" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.873350 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef5590a-c39c-4217-a5ca-14e2deb926b3" containerName="nova-api-log" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.873377 4865 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="cef5590a-c39c-4217-a5ca-14e2deb926b3" containerName="nova-api-api" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.873428 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.873403949 podStartE2EDuration="2.873403949s" podCreationTimestamp="2026-01-23 12:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:16:17.843199891 +0000 UTC m=+1422.012272117" watchObservedRunningTime="2026-01-23 12:16:17.873403949 +0000 UTC m=+1422.042476185" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.874705 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.880237 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.880444 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.882146 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 12:16:17 crc kubenswrapper[4865]: I0123 12:16:17.909097 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.033528 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad9795d9-23da-4c83-af4f-cd9ee93afd93-logs\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.033659 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad9795d9-23da-4c83-af4f-cd9ee93afd93-config-data\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.033755 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad9795d9-23da-4c83-af4f-cd9ee93afd93-public-tls-certs\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.033804 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad9795d9-23da-4c83-af4f-cd9ee93afd93-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.033905 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzmwd\" (UniqueName: \"kubernetes.io/projected/ad9795d9-23da-4c83-af4f-cd9ee93afd93-kube-api-access-zzmwd\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.033944 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ad9795d9-23da-4c83-af4f-cd9ee93afd93-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.129841 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cef5590a-c39c-4217-a5ca-14e2deb926b3" path="/var/lib/kubelet/pods/cef5590a-c39c-4217-a5ca-14e2deb926b3/volumes" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.134764 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad9795d9-23da-4c83-af4f-cd9ee93afd93-public-tls-certs\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.134825 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad9795d9-23da-4c83-af4f-cd9ee93afd93-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.134935 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzmwd\" (UniqueName: \"kubernetes.io/projected/ad9795d9-23da-4c83-af4f-cd9ee93afd93-kube-api-access-zzmwd\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.134965 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad9795d9-23da-4c83-af4f-cd9ee93afd93-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.135057 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad9795d9-23da-4c83-af4f-cd9ee93afd93-logs\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.135090 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad9795d9-23da-4c83-af4f-cd9ee93afd93-config-data\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.135867 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad9795d9-23da-4c83-af4f-cd9ee93afd93-logs\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.142499 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad9795d9-23da-4c83-af4f-cd9ee93afd93-config-data\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.142725 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad9795d9-23da-4c83-af4f-cd9ee93afd93-public-tls-certs\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 
23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.148089 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad9795d9-23da-4c83-af4f-cd9ee93afd93-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.157045 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad9795d9-23da-4c83-af4f-cd9ee93afd93-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.161180 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzmwd\" (UniqueName: \"kubernetes.io/projected/ad9795d9-23da-4c83-af4f-cd9ee93afd93-kube-api-access-zzmwd\") pod \"nova-api-0\" (UID: \"ad9795d9-23da-4c83-af4f-cd9ee93afd93\") " pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.205510 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.637447 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.776386 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.776640 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:16:18 crc kubenswrapper[4865]: I0123 12:16:18.777271 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad9795d9-23da-4c83-af4f-cd9ee93afd93","Type":"ContainerStarted","Data":"e9587f1790c1a6a2e85657b5fc66a4955d0cad5cfcda90116c4aec2eeae3df36"} Jan 23 12:16:19 crc kubenswrapper[4865]: I0123 12:16:19.789163 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad9795d9-23da-4c83-af4f-cd9ee93afd93","Type":"ContainerStarted","Data":"61c8f74f5838e21aa7a01b3e4afb8bc8827d11500ee6d1b925f91940d39634da"} Jan 23 12:16:19 crc kubenswrapper[4865]: I0123 12:16:19.789211 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ad9795d9-23da-4c83-af4f-cd9ee93afd93","Type":"ContainerStarted","Data":"59f8d0913fbfc9d7bf8a4e3e96618853182abb274c16f2fbb4fda58cc5703733"} Jan 23 12:16:19 crc kubenswrapper[4865]: I0123 12:16:19.813792 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.813763808 podStartE2EDuration="2.813763808s" podCreationTimestamp="2026-01-23 12:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:16:19.80681669 +0000 UTC m=+1423.975888926" watchObservedRunningTime="2026-01-23 12:16:19.813763808 +0000 UTC 
m=+1423.982836054" Jan 23 12:16:21 crc kubenswrapper[4865]: I0123 12:16:21.177147 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 12:16:21 crc kubenswrapper[4865]: I0123 12:16:21.177790 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 23 12:16:21 crc kubenswrapper[4865]: I0123 12:16:21.206586 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 23 12:16:26 crc kubenswrapper[4865]: I0123 12:16:26.177803 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 12:16:26 crc kubenswrapper[4865]: I0123 12:16:26.178151 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 23 12:16:26 crc kubenswrapper[4865]: I0123 12:16:26.207041 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 23 12:16:26 crc kubenswrapper[4865]: I0123 12:16:26.234349 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 23 12:16:26 crc kubenswrapper[4865]: I0123 12:16:26.878103 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 23 12:16:27 crc kubenswrapper[4865]: I0123 12:16:27.188732 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f83b3f2b-567e-4afe-9797-db1aa2bdadaa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:16:27 crc kubenswrapper[4865]: I0123 12:16:27.189009 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f83b3f2b-567e-4afe-9797-db1aa2bdadaa" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:16:28 crc kubenswrapper[4865]: I0123 12:16:28.205905 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 12:16:28 crc kubenswrapper[4865]: I0123 12:16:28.206513 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 23 12:16:29 crc kubenswrapper[4865]: I0123 12:16:29.254786 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ad9795d9-23da-4c83-af4f-cd9ee93afd93" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:16:29 crc kubenswrapper[4865]: I0123 12:16:29.254826 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ad9795d9-23da-4c83-af4f-cd9ee93afd93" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:16:31 crc kubenswrapper[4865]: I0123 12:16:31.206398 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 12:16:34 crc kubenswrapper[4865]: I0123 12:16:34.885628 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 12:16:34 crc kubenswrapper[4865]: I0123 
12:16:34.886342 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="813e1f0e-32d5-4237-8722-440164262885" containerName="kube-state-metrics" containerID="cri-o://13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279" gracePeriod=30 Jan 23 12:16:35 crc kubenswrapper[4865]: I0123 12:16:35.820546 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 12:16:35 crc kubenswrapper[4865]: I0123 12:16:35.927987 4865 generic.go:334] "Generic (PLEG): container finished" podID="813e1f0e-32d5-4237-8722-440164262885" containerID="13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279" exitCode=2 Jan 23 12:16:35 crc kubenswrapper[4865]: I0123 12:16:35.928038 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"813e1f0e-32d5-4237-8722-440164262885","Type":"ContainerDied","Data":"13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279"} Jan 23 12:16:35 crc kubenswrapper[4865]: I0123 12:16:35.928076 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"813e1f0e-32d5-4237-8722-440164262885","Type":"ContainerDied","Data":"d8f2dab6e275d8d8af18a8c3788f817f71980945ff6700eb4b89cae411f7669e"} Jan 23 12:16:35 crc kubenswrapper[4865]: I0123 12:16:35.928094 4865 scope.go:117] "RemoveContainer" containerID="13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279" Jan 23 12:16:35 crc kubenswrapper[4865]: I0123 12:16:35.928119 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 12:16:35 crc kubenswrapper[4865]: I0123 12:16:35.959523 4865 scope.go:117] "RemoveContainer" containerID="13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279" Jan 23 12:16:35 crc kubenswrapper[4865]: E0123 12:16:35.960310 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279\": container with ID starting with 13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279 not found: ID does not exist" containerID="13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279" Jan 23 12:16:35 crc kubenswrapper[4865]: I0123 12:16:35.960353 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279"} err="failed to get container status \"13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279\": rpc error: code = NotFound desc = could not find container \"13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279\": container with ID starting with 13c6f75f4fac123c9cd4050c373fed51d8a5aa005961bbfdcc41317ff642e279 not found: ID does not exist" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.009303 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v4k6\" (UniqueName: \"kubernetes.io/projected/813e1f0e-32d5-4237-8722-440164262885-kube-api-access-9v4k6\") pod \"813e1f0e-32d5-4237-8722-440164262885\" (UID: \"813e1f0e-32d5-4237-8722-440164262885\") " Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.014211 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/813e1f0e-32d5-4237-8722-440164262885-kube-api-access-9v4k6" 
(OuterVolumeSpecName: "kube-api-access-9v4k6") pod "813e1f0e-32d5-4237-8722-440164262885" (UID: "813e1f0e-32d5-4237-8722-440164262885"). InnerVolumeSpecName "kube-api-access-9v4k6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.111839 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v4k6\" (UniqueName: \"kubernetes.io/projected/813e1f0e-32d5-4237-8722-440164262885-kube-api-access-9v4k6\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.184074 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.193052 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.196708 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.259348 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.274154 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.293622 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 12:16:36 crc kubenswrapper[4865]: E0123 12:16:36.294049 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813e1f0e-32d5-4237-8722-440164262885" containerName="kube-state-metrics" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.294066 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="813e1f0e-32d5-4237-8722-440164262885" containerName="kube-state-metrics" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.294260 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="813e1f0e-32d5-4237-8722-440164262885" containerName="kube-state-metrics" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.294943 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.298109 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.298260 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.303215 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.417003 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb0a89a-49f9-4a31-9cec-669e88882018-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4cb0a89a-49f9-4a31-9cec-669e88882018\") " pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.417069 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzcph\" (UniqueName: \"kubernetes.io/projected/4cb0a89a-49f9-4a31-9cec-669e88882018-kube-api-access-jzcph\") pod \"kube-state-metrics-0\" (UID: \"4cb0a89a-49f9-4a31-9cec-669e88882018\") " pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.417898 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cb0a89a-49f9-4a31-9cec-669e88882018-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4cb0a89a-49f9-4a31-9cec-669e88882018\") " pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.417939 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4cb0a89a-49f9-4a31-9cec-669e88882018-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4cb0a89a-49f9-4a31-9cec-669e88882018\") " pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.519796 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cb0a89a-49f9-4a31-9cec-669e88882018-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4cb0a89a-49f9-4a31-9cec-669e88882018\") " pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.519843 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4cb0a89a-49f9-4a31-9cec-669e88882018-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4cb0a89a-49f9-4a31-9cec-669e88882018\") " pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.519980 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb0a89a-49f9-4a31-9cec-669e88882018-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4cb0a89a-49f9-4a31-9cec-669e88882018\") " pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.520008 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzcph\" 
(UniqueName: \"kubernetes.io/projected/4cb0a89a-49f9-4a31-9cec-669e88882018-kube-api-access-jzcph\") pod \"kube-state-metrics-0\" (UID: \"4cb0a89a-49f9-4a31-9cec-669e88882018\") " pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.525570 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4cb0a89a-49f9-4a31-9cec-669e88882018-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4cb0a89a-49f9-4a31-9cec-669e88882018\") " pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.525667 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4cb0a89a-49f9-4a31-9cec-669e88882018-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4cb0a89a-49f9-4a31-9cec-669e88882018\") " pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.529063 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4cb0a89a-49f9-4a31-9cec-669e88882018-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4cb0a89a-49f9-4a31-9cec-669e88882018\") " pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.540671 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzcph\" (UniqueName: \"kubernetes.io/projected/4cb0a89a-49f9-4a31-9cec-669e88882018-kube-api-access-jzcph\") pod \"kube-state-metrics-0\" (UID: \"4cb0a89a-49f9-4a31-9cec-669e88882018\") " pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.628809 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 23 12:16:36 crc kubenswrapper[4865]: I0123 12:16:36.941787 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.089316 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.490534 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.492046 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="ceilometer-central-agent" containerID="cri-o://8492ea2fbac475cc90eb07dee3db0e8cb5199f2e11f57794bebb09cabd933c19" gracePeriod=30 Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.492322 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="proxy-httpd" containerID="cri-o://14c4874b153f143e6ac61f8a7e7840aca7d6e6d8bd8521f39a0d633f5456ac3f" gracePeriod=30 Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.492507 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="sg-core" containerID="cri-o://1170bc7a74c1938851ffbe7a8ec31fda85fcc09f6471d22fca7063a9167e7163" gracePeriod=30 Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.492579 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="ceilometer-notification-agent" containerID="cri-o://5b372db41521c1d71334cef8fd8bbd524b55f496ffbd8f1a7288be8473a57011" gracePeriod=30 Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.946338 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4cb0a89a-49f9-4a31-9cec-669e88882018","Type":"ContainerStarted","Data":"43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5"} Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.947583 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4cb0a89a-49f9-4a31-9cec-669e88882018","Type":"ContainerStarted","Data":"ff9eedf98e8e98f275711242f598c6bb0f240fbcd5b434811a8b32cc214fb2d5"} Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.947719 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.949487 4865 generic.go:334] "Generic (PLEG): container finished" podID="eb56a942-823d-4d71-a224-cf157da9d100" containerID="14c4874b153f143e6ac61f8a7e7840aca7d6e6d8bd8521f39a0d633f5456ac3f" exitCode=0 Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.949585 4865 generic.go:334] "Generic (PLEG): container finished" podID="eb56a942-823d-4d71-a224-cf157da9d100" containerID="1170bc7a74c1938851ffbe7a8ec31fda85fcc09f6471d22fca7063a9167e7163" exitCode=2 Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.949748 4865 generic.go:334] "Generic (PLEG): container finished" podID="eb56a942-823d-4d71-a224-cf157da9d100" containerID="8492ea2fbac475cc90eb07dee3db0e8cb5199f2e11f57794bebb09cabd933c19" exitCode=0 Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.949565 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb56a942-823d-4d71-a224-cf157da9d100","Type":"ContainerDied","Data":"14c4874b153f143e6ac61f8a7e7840aca7d6e6d8bd8521f39a0d633f5456ac3f"} Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.949923 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb56a942-823d-4d71-a224-cf157da9d100","Type":"ContainerDied","Data":"1170bc7a74c1938851ffbe7a8ec31fda85fcc09f6471d22fca7063a9167e7163"} Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.949947 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb56a942-823d-4d71-a224-cf157da9d100","Type":"ContainerDied","Data":"8492ea2fbac475cc90eb07dee3db0e8cb5199f2e11f57794bebb09cabd933c19"} Jan 23 12:16:37 crc kubenswrapper[4865]: I0123 12:16:37.970586 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.576191413 podStartE2EDuration="1.970570566s" podCreationTimestamp="2026-01-23 12:16:36 +0000 UTC" firstStartedPulling="2026-01-23 12:16:37.102822595 +0000 UTC m=+1441.271894821" lastFinishedPulling="2026-01-23 12:16:37.497201728 +0000 UTC m=+1441.666273974" observedRunningTime="2026-01-23 12:16:37.9636919 +0000 UTC m=+1442.132764116" watchObservedRunningTime="2026-01-23 12:16:37.970570566 +0000 UTC m=+1442.139642782" Jan 23 12:16:38 crc kubenswrapper[4865]: I0123 12:16:38.127544 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="813e1f0e-32d5-4237-8722-440164262885" path="/var/lib/kubelet/pods/813e1f0e-32d5-4237-8722-440164262885/volumes" Jan 23 12:16:38 crc kubenswrapper[4865]: I0123 12:16:38.212410 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 12:16:38 crc kubenswrapper[4865]: I0123 12:16:38.213754 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 12:16:38 crc kubenswrapper[4865]: I0123 12:16:38.218259 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 12:16:38 crc kubenswrapper[4865]: I0123 12:16:38.220861 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 23 12:16:38 crc kubenswrapper[4865]: I0123 12:16:38.958455 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 23 12:16:38 crc kubenswrapper[4865]: I0123 12:16:38.965127 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 23 12:16:40 crc kubenswrapper[4865]: I0123 12:16:40.978099 4865 generic.go:334] "Generic (PLEG): container finished" podID="eb56a942-823d-4d71-a224-cf157da9d100" containerID="5b372db41521c1d71334cef8fd8bbd524b55f496ffbd8f1a7288be8473a57011" exitCode=0 Jan 23 12:16:40 crc kubenswrapper[4865]: I0123 12:16:40.978276 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb56a942-823d-4d71-a224-cf157da9d100","Type":"ContainerDied","Data":"5b372db41521c1d71334cef8fd8bbd524b55f496ffbd8f1a7288be8473a57011"} Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.055577 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.202985 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-scripts\") pod \"eb56a942-823d-4d71-a224-cf157da9d100\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.203255 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-sg-core-conf-yaml\") pod \"eb56a942-823d-4d71-a224-cf157da9d100\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.203377 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvwn8\" (UniqueName: \"kubernetes.io/projected/eb56a942-823d-4d71-a224-cf157da9d100-kube-api-access-cvwn8\") pod \"eb56a942-823d-4d71-a224-cf157da9d100\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.203532 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb56a942-823d-4d71-a224-cf157da9d100-run-httpd\") pod \"eb56a942-823d-4d71-a224-cf157da9d100\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.203674 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb56a942-823d-4d71-a224-cf157da9d100-log-httpd\") pod \"eb56a942-823d-4d71-a224-cf157da9d100\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.203750 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-config-data\") pod \"eb56a942-823d-4d71-a224-cf157da9d100\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.203878 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-combined-ca-bundle\") pod \"eb56a942-823d-4d71-a224-cf157da9d100\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.203776 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb56a942-823d-4d71-a224-cf157da9d100-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eb56a942-823d-4d71-a224-cf157da9d100" (UID: "eb56a942-823d-4d71-a224-cf157da9d100"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.204386 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb56a942-823d-4d71-a224-cf157da9d100-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eb56a942-823d-4d71-a224-cf157da9d100" (UID: "eb56a942-823d-4d71-a224-cf157da9d100"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.204404 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb56a942-823d-4d71-a224-cf157da9d100-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.209345 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-scripts" (OuterVolumeSpecName: "scripts") pod "eb56a942-823d-4d71-a224-cf157da9d100" (UID: "eb56a942-823d-4d71-a224-cf157da9d100"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.209403 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb56a942-823d-4d71-a224-cf157da9d100-kube-api-access-cvwn8" (OuterVolumeSpecName: "kube-api-access-cvwn8") pod "eb56a942-823d-4d71-a224-cf157da9d100" (UID: "eb56a942-823d-4d71-a224-cf157da9d100"). InnerVolumeSpecName "kube-api-access-cvwn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.232088 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eb56a942-823d-4d71-a224-cf157da9d100" (UID: "eb56a942-823d-4d71-a224-cf157da9d100"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.277560 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb56a942-823d-4d71-a224-cf157da9d100" (UID: "eb56a942-823d-4d71-a224-cf157da9d100"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.304804 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-config-data" (OuterVolumeSpecName: "config-data") pod "eb56a942-823d-4d71-a224-cf157da9d100" (UID: "eb56a942-823d-4d71-a224-cf157da9d100"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.305533 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-config-data\") pod \"eb56a942-823d-4d71-a224-cf157da9d100\" (UID: \"eb56a942-823d-4d71-a224-cf157da9d100\") " Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.306449 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb56a942-823d-4d71-a224-cf157da9d100-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.307437 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.307582 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.307691 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.307800 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvwn8\" (UniqueName: \"kubernetes.io/projected/eb56a942-823d-4d71-a224-cf157da9d100-kube-api-access-cvwn8\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:41 crc kubenswrapper[4865]: W0123 12:16:41.305751 4865 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/eb56a942-823d-4d71-a224-cf157da9d100/volumes/kubernetes.io~secret/config-data Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.307981 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-config-data" (OuterVolumeSpecName: "config-data") pod "eb56a942-823d-4d71-a224-cf157da9d100" (UID: "eb56a942-823d-4d71-a224-cf157da9d100"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.410031 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb56a942-823d-4d71-a224-cf157da9d100-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.990871 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eb56a942-823d-4d71-a224-cf157da9d100","Type":"ContainerDied","Data":"273a8dd7def25d477014b20f33010c01ca8d8219e1c4629c821a051519f8fc87"} Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.990933 4865 scope.go:117] "RemoveContainer" containerID="14c4874b153f143e6ac61f8a7e7840aca7d6e6d8bd8521f39a0d633f5456ac3f" Jan 23 12:16:41 crc kubenswrapper[4865]: I0123 12:16:41.991091 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.040380 4865 scope.go:117] "RemoveContainer" containerID="1170bc7a74c1938851ffbe7a8ec31fda85fcc09f6471d22fca7063a9167e7163" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.046572 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.059484 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.077691 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:16:42 crc kubenswrapper[4865]: E0123 12:16:42.078517 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="ceilometer-notification-agent" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.078636 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="ceilometer-notification-agent" Jan 23 12:16:42 crc kubenswrapper[4865]: E0123 12:16:42.078714 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="proxy-httpd" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.078770 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="proxy-httpd" Jan 23 12:16:42 crc kubenswrapper[4865]: E0123 12:16:42.078836 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="ceilometer-central-agent" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.078887 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="ceilometer-central-agent" Jan 23 12:16:42 crc kubenswrapper[4865]: E0123 12:16:42.079013 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="sg-core" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.079066 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="sg-core" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.079416 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="ceilometer-central-agent" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.079529 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="sg-core" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.079659 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="proxy-httpd" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.079961 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb56a942-823d-4d71-a224-cf157da9d100" containerName="ceilometer-notification-agent" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.081900 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.087068 4865 scope.go:117] "RemoveContainer" containerID="5b372db41521c1d71334cef8fd8bbd524b55f496ffbd8f1a7288be8473a57011" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.087357 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.087352 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.087622 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.115912 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.128511 4865 scope.go:117] "RemoveContainer" containerID="8492ea2fbac475cc90eb07dee3db0e8cb5199f2e11f57794bebb09cabd933c19" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.136532 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb56a942-823d-4d71-a224-cf157da9d100" path="/var/lib/kubelet/pods/eb56a942-823d-4d71-a224-cf157da9d100/volumes" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.141993 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.142313 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c63db198-8ec8-42b1-8211-d207c172706c-log-httpd\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.142512 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.142636 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.142787 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2r7\" (UniqueName: \"kubernetes.io/projected/c63db198-8ec8-42b1-8211-d207c172706c-kube-api-access-gn2r7\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.143016 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-config-data\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " 
pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.143066 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-scripts\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.143106 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c63db198-8ec8-42b1-8211-d207c172706c-run-httpd\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.247327 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2r7\" (UniqueName: \"kubernetes.io/projected/c63db198-8ec8-42b1-8211-d207c172706c-kube-api-access-gn2r7\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.247434 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-config-data\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.247485 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-scripts\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.247541 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c63db198-8ec8-42b1-8211-d207c172706c-run-httpd\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.247641 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.247681 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c63db198-8ec8-42b1-8211-d207c172706c-log-httpd\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.247792 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.247853 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.253389 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.260408 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-config-data\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.265400 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-scripts\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.265783 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c63db198-8ec8-42b1-8211-d207c172706c-run-httpd\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.268469 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c63db198-8ec8-42b1-8211-d207c172706c-log-httpd\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.268957 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.287584 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2r7\" (UniqueName: \"kubernetes.io/projected/c63db198-8ec8-42b1-8211-d207c172706c-kube-api-access-gn2r7\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.296096 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.407960 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 12:16:42 crc kubenswrapper[4865]: I0123 12:16:42.876356 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 12:16:43 crc kubenswrapper[4865]: I0123 12:16:43.003147 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c63db198-8ec8-42b1-8211-d207c172706c","Type":"ContainerStarted","Data":"65100cdd4df99842af0b84774c42260f5499775a796fc462445a64e82467fe4b"} Jan 23 12:16:44 crc kubenswrapper[4865]: I0123 12:16:44.016507 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c63db198-8ec8-42b1-8211-d207c172706c","Type":"ContainerStarted","Data":"477c52a280a948025e13a8feda92fd8b614f305fe95984ffd0b1b5641cedd357"} Jan 23 12:16:44 crc kubenswrapper[4865]: I0123 12:16:44.017075 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c63db198-8ec8-42b1-8211-d207c172706c","Type":"ContainerStarted","Data":"0eaf574272189462b7fced379e7c324bec419a8b14cb558060d9466a414d00db"} Jan 23 12:16:45 crc kubenswrapper[4865]: I0123 12:16:45.031782 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c63db198-8ec8-42b1-8211-d207c172706c","Type":"ContainerStarted","Data":"33c46b0735e5af2c1e7d667d610ff52f526dbbc2cc720a72f733806394f3c237"} Jan 23 12:16:47 crc kubenswrapper[4865]: I0123 12:16:47.266832 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:16:47 crc kubenswrapper[4865]: I0123 12:16:47.902758 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:16:47 crc kubenswrapper[4865]: I0123 12:16:47.902799 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:16:48 crc kubenswrapper[4865]: I0123 12:16:48.340792 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:16:48 crc kubenswrapper[4865]: I0123 12:16:48.493333 4865 scope.go:117] "RemoveContainer" containerID="c914ceab4d869f7b580f1846489fae378d704b7e450b541c717397b7894f4daf" Jan 23 12:16:48 crc kubenswrapper[4865]: I0123 12:16:48.776744 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 23 12:16:48 crc kubenswrapper[4865]: I0123 12:16:48.776832 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:16:50 crc kubenswrapper[4865]: I0123 12:16:50.303072 4865 scope.go:117] "RemoveContainer" containerID="e53270601ed6c20b37cb9365a03027c97883ff794d9e021e00cbf472e29191b5" Jan 23 12:16:50 crc kubenswrapper[4865]: I0123 12:16:50.468482 4865 scope.go:117] "RemoveContainer" containerID="6efb8ee2a01774c83baf5e88dcf56820583bb17b76d503cff132ff1fba71eb2a" Jan 23 12:16:50 crc kubenswrapper[4865]: I0123 12:16:50.499935 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 12:16:50 crc kubenswrapper[4865]: I0123 12:16:50.553819 4865 scope.go:117] "RemoveContainer" containerID="53821fb832774694ea5785f0987661c3ae10cebcd2820bdbf3fd05d5715af480" Jan 23 12:16:51 crc kubenswrapper[4865]: I0123 12:16:51.102511 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c63db198-8ec8-42b1-8211-d207c172706c","Type":"ContainerStarted","Data":"9b6e695de5e717162ef62eac2c551888f78a635db078d49284199b6bcfb3030d"} Jan 23 12:16:51 crc kubenswrapper[4865]: I0123 12:16:51.102860 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 12:16:51 crc kubenswrapper[4865]: I0123 12:16:51.131871 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.607528802 podStartE2EDuration="9.131847932s" podCreationTimestamp="2026-01-23 12:16:42 +0000 UTC" firstStartedPulling="2026-01-23 12:16:42.896856218 +0000 UTC m=+1447.065928444" lastFinishedPulling="2026-01-23 12:16:45.421175348 +0000 UTC m=+1449.590247574" observedRunningTime="2026-01-23 12:16:51.125409437 +0000 UTC m=+1455.294481663" watchObservedRunningTime="2026-01-23 12:16:51.131847932 +0000 UTC m=+1455.300920158" Jan 23 12:17:12 crc kubenswrapper[4865]: I0123 12:17:12.417315 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 12:17:18 crc kubenswrapper[4865]: I0123 12:17:18.776816 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:17:18 crc kubenswrapper[4865]: I0123 12:17:18.777324 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:17:18 crc kubenswrapper[4865]: I0123 12:17:18.777363 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 12:17:18 crc kubenswrapper[4865]: I0123 12:17:18.778108 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f27c5e1f3f822d3f73db149902949b4aa2098b5ef3e947246d94e8825258d08b"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 12:17:18 crc kubenswrapper[4865]: I0123 12:17:18.778156 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://f27c5e1f3f822d3f73db149902949b4aa2098b5ef3e947246d94e8825258d08b" gracePeriod=600 Jan 23 12:17:19 crc kubenswrapper[4865]: I0123 12:17:19.392766 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="f27c5e1f3f822d3f73db149902949b4aa2098b5ef3e947246d94e8825258d08b" exitCode=0 Jan 23 12:17:19 crc kubenswrapper[4865]: I0123 12:17:19.392863 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"f27c5e1f3f822d3f73db149902949b4aa2098b5ef3e947246d94e8825258d08b"} Jan 23 12:17:19 crc kubenswrapper[4865]: I0123 12:17:19.393227 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725"} Jan 23 12:17:19 crc kubenswrapper[4865]: I0123 12:17:19.393254 4865 scope.go:117] "RemoveContainer" containerID="e8f1e0b3d016dae118cc529905287d1f5d83d908d73deab63599d7b4262f2021" Jan 23 12:17:22 crc kubenswrapper[4865]: I0123 12:17:22.659830 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 12:17:23 crc kubenswrapper[4865]: I0123 12:17:23.611736 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 12:17:27 crc kubenswrapper[4865]: I0123 12:17:27.890947 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" containerName="rabbitmq" containerID="cri-o://5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813" gracePeriod=604796 Jan 23 12:17:28 crc kubenswrapper[4865]: I0123 12:17:28.089568 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="10a07490-f361-43e5-8d3e-a8bd917b3b84" containerName="rabbitmq" containerID="cri-o://25508c7568020837db3aff1bf4699e4717036225593f03472819e46d410c7752" gracePeriod=604795 Jan 23 12:17:28 crc kubenswrapper[4865]: I0123 12:17:28.484523 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 23 12:17:28 crc kubenswrapper[4865]: I0123 12:17:28.812124 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="10a07490-f361-43e5-8d3e-a8bd917b3b84" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.496677 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.538671 4865 generic.go:334] "Generic (PLEG): container finished" podID="10a07490-f361-43e5-8d3e-a8bd917b3b84" containerID="25508c7568020837db3aff1bf4699e4717036225593f03472819e46d410c7752" exitCode=0 Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.538754 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10a07490-f361-43e5-8d3e-a8bd917b3b84","Type":"ContainerDied","Data":"25508c7568020837db3aff1bf4699e4717036225593f03472819e46d410c7752"} Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.541189 4865 generic.go:334] "Generic (PLEG): container finished" podID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" containerID="5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813" exitCode=0 Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.541218 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ebb7983c-3aed-42f5-8635-8188f7abb9d5","Type":"ContainerDied","Data":"5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813"} Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.541245 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ebb7983c-3aed-42f5-8635-8188f7abb9d5","Type":"ContainerDied","Data":"0c65d0447f85cb26b3cf25ce7b89d5bc5136452955764b64393bc357676606dd"} Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.541265 4865 scope.go:117] "RemoveContainer" containerID="5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.541414 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.598296 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9p8c\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-kube-api-access-r9p8c\") pod \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.598545 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-tls\") pod \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.598707 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-config-data\") pod \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.605894 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ebb7983c-3aed-42f5-8635-8188f7abb9d5-erlang-cookie-secret\") pod \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.606159 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-plugins\") pod \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.606311 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-erlang-cookie\") pod \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.606434 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-plugins-conf\") pod \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.606587 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-server-conf\") pod \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.606827 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.606928 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-confd\") pod \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\" (UID: 
\"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.607031 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ebb7983c-3aed-42f5-8635-8188f7abb9d5-pod-info\") pod \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\" (UID: \"ebb7983c-3aed-42f5-8635-8188f7abb9d5\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.610192 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "ebb7983c-3aed-42f5-8635-8188f7abb9d5" (UID: "ebb7983c-3aed-42f5-8635-8188f7abb9d5"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.621098 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/ebb7983c-3aed-42f5-8635-8188f7abb9d5-pod-info" (OuterVolumeSpecName: "pod-info") pod "ebb7983c-3aed-42f5-8635-8188f7abb9d5" (UID: "ebb7983c-3aed-42f5-8635-8188f7abb9d5"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.622316 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "ebb7983c-3aed-42f5-8635-8188f7abb9d5" (UID: "ebb7983c-3aed-42f5-8635-8188f7abb9d5"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.622768 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "ebb7983c-3aed-42f5-8635-8188f7abb9d5" (UID: "ebb7983c-3aed-42f5-8635-8188f7abb9d5"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.623656 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "ebb7983c-3aed-42f5-8635-8188f7abb9d5" (UID: "ebb7983c-3aed-42f5-8635-8188f7abb9d5"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.649033 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "ebb7983c-3aed-42f5-8635-8188f7abb9d5" (UID: "ebb7983c-3aed-42f5-8635-8188f7abb9d5"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.654650 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-kube-api-access-r9p8c" (OuterVolumeSpecName: "kube-api-access-r9p8c") pod "ebb7983c-3aed-42f5-8635-8188f7abb9d5" (UID: "ebb7983c-3aed-42f5-8635-8188f7abb9d5"). InnerVolumeSpecName "kube-api-access-r9p8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.654773 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb7983c-3aed-42f5-8635-8188f7abb9d5-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "ebb7983c-3aed-42f5-8635-8188f7abb9d5" (UID: "ebb7983c-3aed-42f5-8635-8188f7abb9d5"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.657088 4865 scope.go:117] "RemoveContainer" containerID="fe9e5ea611f2b63e2e08e8fea6bfb1afbacd1402c29a26849582d14d630918e8" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.680447 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-config-data" (OuterVolumeSpecName: "config-data") pod "ebb7983c-3aed-42f5-8635-8188f7abb9d5" (UID: "ebb7983c-3aed-42f5-8635-8188f7abb9d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.715259 4865 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.715308 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.715319 4865 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ebb7983c-3aed-42f5-8635-8188f7abb9d5-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.715329 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9p8c\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-kube-api-access-r9p8c\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.715339 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.715347 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.715355 4865 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ebb7983c-3aed-42f5-8635-8188f7abb9d5-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.715363 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.715371 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 
crc kubenswrapper[4865]: I0123 12:17:34.720170 4865 scope.go:117] "RemoveContainer" containerID="5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813" Jan 23 12:17:34 crc kubenswrapper[4865]: E0123 12:17:34.720490 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813\": container with ID starting with 5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813 not found: ID does not exist" containerID="5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.720538 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813"} err="failed to get container status \"5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813\": rpc error: code = NotFound desc = could not find container \"5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813\": container with ID starting with 5d35b8ea1221617e50e1233ea8ee36c65d05254e9815d01b994d48d914d39813 not found: ID does not exist" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.720570 4865 scope.go:117] "RemoveContainer" containerID="fe9e5ea611f2b63e2e08e8fea6bfb1afbacd1402c29a26849582d14d630918e8" Jan 23 12:17:34 crc kubenswrapper[4865]: E0123 12:17:34.721119 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9e5ea611f2b63e2e08e8fea6bfb1afbacd1402c29a26849582d14d630918e8\": container with ID starting with fe9e5ea611f2b63e2e08e8fea6bfb1afbacd1402c29a26849582d14d630918e8 not found: ID does not exist" containerID="fe9e5ea611f2b63e2e08e8fea6bfb1afbacd1402c29a26849582d14d630918e8" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.721156 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9e5ea611f2b63e2e08e8fea6bfb1afbacd1402c29a26849582d14d630918e8"} err="failed to get container status \"fe9e5ea611f2b63e2e08e8fea6bfb1afbacd1402c29a26849582d14d630918e8\": rpc error: code = NotFound desc = could not find container \"fe9e5ea611f2b63e2e08e8fea6bfb1afbacd1402c29a26849582d14d630918e8\": container with ID starting with fe9e5ea611f2b63e2e08e8fea6bfb1afbacd1402c29a26849582d14d630918e8 not found: ID does not exist" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.722144 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.766024 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.771277 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-server-conf" (OuterVolumeSpecName: "server-conf") pod "ebb7983c-3aed-42f5-8635-8188f7abb9d5" (UID: "ebb7983c-3aed-42f5-8635-8188f7abb9d5"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.816782 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/10a07490-f361-43e5-8d3e-a8bd917b3b84-erlang-cookie-secret\") pod \"10a07490-f361-43e5-8d3e-a8bd917b3b84\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.816834 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-tls\") pod \"10a07490-f361-43e5-8d3e-a8bd917b3b84\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.816870 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-config-data\") pod \"10a07490-f361-43e5-8d3e-a8bd917b3b84\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.816891 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-server-conf\") pod \"10a07490-f361-43e5-8d3e-a8bd917b3b84\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.816912 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-plugins\") pod \"10a07490-f361-43e5-8d3e-a8bd917b3b84\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.816925 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-plugins-conf\") pod \"10a07490-f361-43e5-8d3e-a8bd917b3b84\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.816965 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqtwc\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-kube-api-access-jqtwc\") pod \"10a07490-f361-43e5-8d3e-a8bd917b3b84\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.817027 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"10a07490-f361-43e5-8d3e-a8bd917b3b84\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.817104 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/10a07490-f361-43e5-8d3e-a8bd917b3b84-pod-info\") pod \"10a07490-f361-43e5-8d3e-a8bd917b3b84\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.817122 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-confd\") pod \"10a07490-f361-43e5-8d3e-a8bd917b3b84\" (UID: 
\"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.817146 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-erlang-cookie\") pod \"10a07490-f361-43e5-8d3e-a8bd917b3b84\" (UID: \"10a07490-f361-43e5-8d3e-a8bd917b3b84\") " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.817568 4865 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ebb7983c-3aed-42f5-8635-8188f7abb9d5-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.817589 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.820373 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "10a07490-f361-43e5-8d3e-a8bd917b3b84" (UID: "10a07490-f361-43e5-8d3e-a8bd917b3b84"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.820473 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "10a07490-f361-43e5-8d3e-a8bd917b3b84" (UID: "10a07490-f361-43e5-8d3e-a8bd917b3b84"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.821331 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "10a07490-f361-43e5-8d3e-a8bd917b3b84" (UID: "10a07490-f361-43e5-8d3e-a8bd917b3b84"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.833326 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "10a07490-f361-43e5-8d3e-a8bd917b3b84" (UID: "10a07490-f361-43e5-8d3e-a8bd917b3b84"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.837472 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-kube-api-access-jqtwc" (OuterVolumeSpecName: "kube-api-access-jqtwc") pod "10a07490-f361-43e5-8d3e-a8bd917b3b84" (UID: "10a07490-f361-43e5-8d3e-a8bd917b3b84"). InnerVolumeSpecName "kube-api-access-jqtwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.840749 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "10a07490-f361-43e5-8d3e-a8bd917b3b84" (UID: "10a07490-f361-43e5-8d3e-a8bd917b3b84"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.846711 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/10a07490-f361-43e5-8d3e-a8bd917b3b84-pod-info" (OuterVolumeSpecName: "pod-info") pod "10a07490-f361-43e5-8d3e-a8bd917b3b84" (UID: "10a07490-f361-43e5-8d3e-a8bd917b3b84"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.848305 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10a07490-f361-43e5-8d3e-a8bd917b3b84-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "10a07490-f361-43e5-8d3e-a8bd917b3b84" (UID: "10a07490-f361-43e5-8d3e-a8bd917b3b84"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.861122 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "ebb7983c-3aed-42f5-8635-8188f7abb9d5" (UID: "ebb7983c-3aed-42f5-8635-8188f7abb9d5"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.883539 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-config-data" (OuterVolumeSpecName: "config-data") pod "10a07490-f361-43e5-8d3e-a8bd917b3b84" (UID: "10a07490-f361-43e5-8d3e-a8bd917b3b84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.922658 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqtwc\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-kube-api-access-jqtwc\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.922708 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.922721 4865 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/10a07490-f361-43e5-8d3e-a8bd917b3b84-pod-info\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.922730 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.922738 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ebb7983c-3aed-42f5-8635-8188f7abb9d5-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.922747 4865 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/10a07490-f361-43e5-8d3e-a8bd917b3b84-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.922757 4865 reconciler_common.go:293] "Volume detached 
for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.922767 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.922775 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.922783 4865 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.926764 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-server-conf" (OuterVolumeSpecName: "server-conf") pod "10a07490-f361-43e5-8d3e-a8bd917b3b84" (UID: "10a07490-f361-43e5-8d3e-a8bd917b3b84"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.949384 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 23 12:17:34 crc kubenswrapper[4865]: I0123 12:17:34.964900 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "10a07490-f361-43e5-8d3e-a8bd917b3b84" (UID: "10a07490-f361-43e5-8d3e-a8bd917b3b84"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.025039 4865 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/10a07490-f361-43e5-8d3e-a8bd917b3b84-server-conf\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.025260 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.025351 4865 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/10a07490-f361-43e5-8d3e-a8bd917b3b84-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.202025 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.210095 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.233547 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 12:17:35 crc kubenswrapper[4865]: E0123 12:17:35.234145 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" containerName="setup-container" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.234212 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" containerName="setup-container" Jan 23 12:17:35 crc kubenswrapper[4865]: E0123 12:17:35.234279 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a07490-f361-43e5-8d3e-a8bd917b3b84" containerName="setup-container" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.234327 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a07490-f361-43e5-8d3e-a8bd917b3b84" containerName="setup-container" Jan 23 12:17:35 crc kubenswrapper[4865]: E0123 12:17:35.234379 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a07490-f361-43e5-8d3e-a8bd917b3b84" containerName="rabbitmq" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.234435 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a07490-f361-43e5-8d3e-a8bd917b3b84" containerName="rabbitmq" Jan 23 12:17:35 crc kubenswrapper[4865]: E0123 12:17:35.234498 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" containerName="rabbitmq" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.234545 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" containerName="rabbitmq" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.234774 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" containerName="rabbitmq" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.234876 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a07490-f361-43e5-8d3e-a8bd917b3b84" containerName="rabbitmq" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.238097 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.241003 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.241161 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.241269 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.241377 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.241501 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.241628 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rrxrn" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.250026 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.260445 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.337997 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b2b3256c-585c-4ce5-9f99-400086a0117e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.338070 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b2b3256c-585c-4ce5-9f99-400086a0117e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.338118 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b2b3256c-585c-4ce5-9f99-400086a0117e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.338157 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b2b3256c-585c-4ce5-9f99-400086a0117e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.338185 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p4qb\" (UniqueName: \"kubernetes.io/projected/b2b3256c-585c-4ce5-9f99-400086a0117e-kube-api-access-6p4qb\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.338227 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b2b3256c-585c-4ce5-9f99-400086a0117e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.338249 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b2b3256c-585c-4ce5-9f99-400086a0117e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.338273 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b2b3256c-585c-4ce5-9f99-400086a0117e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.338302 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b2b3256c-585c-4ce5-9f99-400086a0117e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.338397 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b2b3256c-585c-4ce5-9f99-400086a0117e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.338458 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.439740 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b2b3256c-585c-4ce5-9f99-400086a0117e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.439793 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b2b3256c-585c-4ce5-9f99-400086a0117e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.439816 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b2b3256c-585c-4ce5-9f99-400086a0117e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.439837 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/b2b3256c-585c-4ce5-9f99-400086a0117e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.439900 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b2b3256c-585c-4ce5-9f99-400086a0117e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.439944 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.439991 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b2b3256c-585c-4ce5-9f99-400086a0117e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.440012 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b2b3256c-585c-4ce5-9f99-400086a0117e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.440044 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b2b3256c-585c-4ce5-9f99-400086a0117e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.440073 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b2b3256c-585c-4ce5-9f99-400086a0117e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.440094 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p4qb\" (UniqueName: \"kubernetes.io/projected/b2b3256c-585c-4ce5-9f99-400086a0117e-kube-api-access-6p4qb\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.440842 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b2b3256c-585c-4ce5-9f99-400086a0117e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.441756 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b2b3256c-585c-4ce5-9f99-400086a0117e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.441939 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b2b3256c-585c-4ce5-9f99-400086a0117e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.442133 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b2b3256c-585c-4ce5-9f99-400086a0117e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.442234 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.442540 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b2b3256c-585c-4ce5-9f99-400086a0117e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.447255 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b2b3256c-585c-4ce5-9f99-400086a0117e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.447869 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b2b3256c-585c-4ce5-9f99-400086a0117e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.454077 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b2b3256c-585c-4ce5-9f99-400086a0117e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.458617 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b2b3256c-585c-4ce5-9f99-400086a0117e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.474781 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.475697 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6p4qb\" (UniqueName: \"kubernetes.io/projected/b2b3256c-585c-4ce5-9f99-400086a0117e-kube-api-access-6p4qb\") pod \"rabbitmq-cell1-server-0\" (UID: \"b2b3256c-585c-4ce5-9f99-400086a0117e\") " pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.551704 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"10a07490-f361-43e5-8d3e-a8bd917b3b84","Type":"ContainerDied","Data":"675b08be31113371ae4c652512134af5515c4a0355844c312dbd7907ec942bee"} Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.551763 4865 scope.go:117] "RemoveContainer" containerID="25508c7568020837db3aff1bf4699e4717036225593f03472819e46d410c7752" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.551853 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.555624 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.597206 4865 scope.go:117] "RemoveContainer" containerID="5e4cca6ecb16f4a92d5899f94604c447d30034fd3d17d308d5dacb49f13a795c" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.611022 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.629653 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.648803 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.653814 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.655853 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.656054 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.656934 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.657056 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.664407 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k2tck" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.665406 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.673127 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.677893 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.745052 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bfb2126d-4cee-451f-8867-6f098453ef37-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.745261 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bfb2126d-4cee-451f-8867-6f098453ef37-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.745349 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bfb2126d-4cee-451f-8867-6f098453ef37-config-data\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.745449 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bfb2126d-4cee-451f-8867-6f098453ef37-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.745530 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v9hl\" (UniqueName: \"kubernetes.io/projected/bfb2126d-4cee-451f-8867-6f098453ef37-kube-api-access-4v9hl\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.745634 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/bfb2126d-4cee-451f-8867-6f098453ef37-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.745818 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.745934 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bfb2126d-4cee-451f-8867-6f098453ef37-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.746018 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bfb2126d-4cee-451f-8867-6f098453ef37-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.746117 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bfb2126d-4cee-451f-8867-6f098453ef37-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.746217 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bfb2126d-4cee-451f-8867-6f098453ef37-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.847825 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.848181 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.848350 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bfb2126d-4cee-451f-8867-6f098453ef37-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.848383 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bfb2126d-4cee-451f-8867-6f098453ef37-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 
12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.848419 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/bfb2126d-4cee-451f-8867-6f098453ef37-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.848450 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bfb2126d-4cee-451f-8867-6f098453ef37-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.848485 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bfb2126d-4cee-451f-8867-6f098453ef37-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.848511 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bfb2126d-4cee-451f-8867-6f098453ef37-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.848532 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bfb2126d-4cee-451f-8867-6f098453ef37-config-data\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.848563 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bfb2126d-4cee-451f-8867-6f098453ef37-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.848580 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v9hl\" (UniqueName: \"kubernetes.io/projected/bfb2126d-4cee-451f-8867-6f098453ef37-kube-api-access-4v9hl\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.848767 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bfb2126d-4cee-451f-8867-6f098453ef37-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.852657 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/bfb2126d-4cee-451f-8867-6f098453ef37-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.853391 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/bfb2126d-4cee-451f-8867-6f098453ef37-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.854046 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/bfb2126d-4cee-451f-8867-6f098453ef37-server-conf\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.854097 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/bfb2126d-4cee-451f-8867-6f098453ef37-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.861179 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/bfb2126d-4cee-451f-8867-6f098453ef37-pod-info\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.865590 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/bfb2126d-4cee-451f-8867-6f098453ef37-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.866098 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/bfb2126d-4cee-451f-8867-6f098453ef37-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.866652 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/bfb2126d-4cee-451f-8867-6f098453ef37-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.867942 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bfb2126d-4cee-451f-8867-6f098453ef37-config-data\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.882388 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v9hl\" (UniqueName: \"kubernetes.io/projected/bfb2126d-4cee-451f-8867-6f098453ef37-kube-api-access-4v9hl\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:35 crc kubenswrapper[4865]: I0123 12:17:35.943836 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-server-0\" (UID: \"bfb2126d-4cee-451f-8867-6f098453ef37\") " pod="openstack/rabbitmq-server-0" Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.020624 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.171376 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a07490-f361-43e5-8d3e-a8bd917b3b84" path="/var/lib/kubelet/pods/10a07490-f361-43e5-8d3e-a8bd917b3b84/volumes" Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.172759 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb7983c-3aed-42f5-8635-8188f7abb9d5" path="/var/lib/kubelet/pods/ebb7983c-3aed-42f5-8635-8188f7abb9d5/volumes" Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.175079 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.588512 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b2b3256c-585c-4ce5-9f99-400086a0117e","Type":"ContainerStarted","Data":"f9ee77afd02ac60e683ad6b2af6b81d782ffb9bf4e7991073e6c15809992d3f5"} Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.625670 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 23 12:17:36 crc kubenswrapper[4865]: W0123 12:17:36.627444 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfb2126d_4cee_451f_8867_6f098453ef37.slice/crio-47393c411451803350e521af688534d1624939c2c2417a0729d3692539f64873 WatchSource:0}: Error finding container 47393c411451803350e521af688534d1624939c2c2417a0729d3692539f64873: Status 404 returned error can't find the container with id 47393c411451803350e521af688534d1624939c2c2417a0729d3692539f64873 Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.900466 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-796fd86499-rh4wg"] Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.903617 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.910408 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.911714 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-796fd86499-rh4wg"] Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.985325 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-dns-swift-storage-0\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.985388 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-openstack-edpm-ipam\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.985426 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-config\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.985460 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-dns-svc\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.985476 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2t72\" (UniqueName: \"kubernetes.io/projected/48b35dfa-4f99-4293-9b99-1b0e1f842af6-kube-api-access-s2t72\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.985500 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-ovsdbserver-nb\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:36 crc kubenswrapper[4865]: I0123 12:17:36.985530 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-ovsdbserver-sb\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.087716 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.087782 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-config\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.087820 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-dns-svc\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.087839 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2t72\" (UniqueName: \"kubernetes.io/projected/48b35dfa-4f99-4293-9b99-1b0e1f842af6-kube-api-access-s2t72\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.087866 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-ovsdbserver-nb\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.087900 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-ovsdbserver-sb\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.087992 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-dns-swift-storage-0\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.088828 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-openstack-edpm-ipam\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.088853 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-ovsdbserver-nb\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.088967 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-dns-swift-storage-0\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: 
\"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.089096 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-ovsdbserver-sb\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.089210 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-config\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.089432 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-dns-svc\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.110192 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2t72\" (UniqueName: \"kubernetes.io/projected/48b35dfa-4f99-4293-9b99-1b0e1f842af6-kube-api-access-s2t72\") pod \"dnsmasq-dns-796fd86499-rh4wg\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.220564 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.619866 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bfb2126d-4cee-451f-8867-6f098453ef37","Type":"ContainerStarted","Data":"47393c411451803350e521af688534d1624939c2c2417a0729d3692539f64873"} Jan 23 12:17:37 crc kubenswrapper[4865]: I0123 12:17:37.748278 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-796fd86499-rh4wg"] Jan 23 12:17:38 crc kubenswrapper[4865]: I0123 12:17:38.631269 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bfb2126d-4cee-451f-8867-6f098453ef37","Type":"ContainerStarted","Data":"f3be8e2bf6e1814acaf0859b5fcd509cb46d2955bb9fa52bbea43917acebcc8a"} Jan 23 12:17:38 crc kubenswrapper[4865]: I0123 12:17:38.634378 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b2b3256c-585c-4ce5-9f99-400086a0117e","Type":"ContainerStarted","Data":"a80232b0d8bf5cda6841d8ea95111eb710021eea608fb19ed140989e4ada775f"} Jan 23 12:17:38 crc kubenswrapper[4865]: I0123 12:17:38.640293 4865 generic.go:334] "Generic (PLEG): container finished" podID="48b35dfa-4f99-4293-9b99-1b0e1f842af6" containerID="c526b453f145330a83879041b22fc7c67eed138267fa98aaf3f8f99c4ebab451" exitCode=0 Jan 23 12:17:38 crc kubenswrapper[4865]: I0123 12:17:38.640340 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" event={"ID":"48b35dfa-4f99-4293-9b99-1b0e1f842af6","Type":"ContainerDied","Data":"c526b453f145330a83879041b22fc7c67eed138267fa98aaf3f8f99c4ebab451"} Jan 23 12:17:38 crc kubenswrapper[4865]: I0123 12:17:38.640381 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-796fd86499-rh4wg" event={"ID":"48b35dfa-4f99-4293-9b99-1b0e1f842af6","Type":"ContainerStarted","Data":"73c240812b3e7adf9b722b40b5fd4226081186e40466280920af53d11eb5e2f1"} Jan 23 12:17:39 crc kubenswrapper[4865]: I0123 12:17:39.658611 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" event={"ID":"48b35dfa-4f99-4293-9b99-1b0e1f842af6","Type":"ContainerStarted","Data":"c0b01e2518ffed6416b3cbfa767606438d08ce958b5b51a9239a6b905a257dc5"} Jan 23 12:17:39 crc kubenswrapper[4865]: I0123 12:17:39.668143 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:39 crc kubenswrapper[4865]: I0123 12:17:39.696452 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" podStartSLOduration=3.696434337 podStartE2EDuration="3.696434337s" podCreationTimestamp="2026-01-23 12:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:17:39.68918071 +0000 UTC m=+1503.858252946" watchObservedRunningTime="2026-01-23 12:17:39.696434337 +0000 UTC m=+1503.865506563" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.222844 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.313890 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbff7fb87-g98dn"] Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.314224 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" podUID="b0f4e90e-919f-4354-b584-a1516961888c" containerName="dnsmasq-dns" containerID="cri-o://20ae29404706f7a2bb46907205e967857b09335143038b6c05b3a6561337384a" gracePeriod=10 Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.545034 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6694597cff-rq925"] Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.546968 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.567748 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6694597cff-rq925"] Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.611136 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-openstack-edpm-ipam\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.611205 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-dns-swift-storage-0\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.611256 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brp5k\" (UniqueName: \"kubernetes.io/projected/f2b055dd-f8d4-4881-9183-f69dac57cef3-kube-api-access-brp5k\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.611315 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-dns-svc\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.611356 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-ovsdbserver-nb\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.611468 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-ovsdbserver-sb\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.611512 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-config\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.713694 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-ovsdbserver-sb\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.713771 4865 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-config\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.713836 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-openstack-edpm-ipam\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.713867 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-dns-swift-storage-0\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.713899 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brp5k\" (UniqueName: \"kubernetes.io/projected/f2b055dd-f8d4-4881-9183-f69dac57cef3-kube-api-access-brp5k\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.713937 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-dns-svc\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.713963 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-ovsdbserver-nb\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.714773 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-ovsdbserver-nb\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.716492 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-dns-swift-storage-0\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.718925 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-openstack-edpm-ipam\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.721266 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-ovsdbserver-sb\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.751946 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-dns-svc\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.752960 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b055dd-f8d4-4881-9183-f69dac57cef3-config\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.810478 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brp5k\" (UniqueName: \"kubernetes.io/projected/f2b055dd-f8d4-4881-9183-f69dac57cef3-kube-api-access-brp5k\") pod \"dnsmasq-dns-6694597cff-rq925\" (UID: \"f2b055dd-f8d4-4881-9183-f69dac57cef3\") " pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.817414 4865 generic.go:334] "Generic (PLEG): container finished" podID="b0f4e90e-919f-4354-b584-a1516961888c" containerID="20ae29404706f7a2bb46907205e967857b09335143038b6c05b3a6561337384a" exitCode=0 Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.817496 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" event={"ID":"b0f4e90e-919f-4354-b584-a1516961888c","Type":"ContainerDied","Data":"20ae29404706f7a2bb46907205e967857b09335143038b6c05b3a6561337384a"} Jan 23 12:17:47 crc kubenswrapper[4865]: I0123 12:17:47.873817 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.118284 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.225348 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-ovsdbserver-nb\") pod \"b0f4e90e-919f-4354-b584-a1516961888c\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.225415 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s6lh\" (UniqueName: \"kubernetes.io/projected/b0f4e90e-919f-4354-b584-a1516961888c-kube-api-access-9s6lh\") pod \"b0f4e90e-919f-4354-b584-a1516961888c\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.225472 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-dns-svc\") pod \"b0f4e90e-919f-4354-b584-a1516961888c\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.225501 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-dns-swift-storage-0\") pod \"b0f4e90e-919f-4354-b584-a1516961888c\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.225546 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-ovsdbserver-sb\") pod \"b0f4e90e-919f-4354-b584-a1516961888c\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.225725 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-config\") pod \"b0f4e90e-919f-4354-b584-a1516961888c\" (UID: \"b0f4e90e-919f-4354-b584-a1516961888c\") " Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.249048 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f4e90e-919f-4354-b584-a1516961888c-kube-api-access-9s6lh" (OuterVolumeSpecName: "kube-api-access-9s6lh") pod "b0f4e90e-919f-4354-b584-a1516961888c" (UID: "b0f4e90e-919f-4354-b584-a1516961888c"). InnerVolumeSpecName "kube-api-access-9s6lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.328618 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s6lh\" (UniqueName: \"kubernetes.io/projected/b0f4e90e-919f-4354-b584-a1516961888c-kube-api-access-9s6lh\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.368780 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b0f4e90e-919f-4354-b584-a1516961888c" (UID: "b0f4e90e-919f-4354-b584-a1516961888c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.374154 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-config" (OuterVolumeSpecName: "config") pod "b0f4e90e-919f-4354-b584-a1516961888c" (UID: "b0f4e90e-919f-4354-b584-a1516961888c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.387142 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b0f4e90e-919f-4354-b584-a1516961888c" (UID: "b0f4e90e-919f-4354-b584-a1516961888c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.396173 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b0f4e90e-919f-4354-b584-a1516961888c" (UID: "b0f4e90e-919f-4354-b584-a1516961888c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.408402 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b0f4e90e-919f-4354-b584-a1516961888c" (UID: "b0f4e90e-919f-4354-b584-a1516961888c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.431939 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.432189 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.432248 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.432299 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.432351 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0f4e90e-919f-4354-b584-a1516961888c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.570335 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6694597cff-rq925"] Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.826413 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6694597cff-rq925" 
event={"ID":"f2b055dd-f8d4-4881-9183-f69dac57cef3","Type":"ContainerStarted","Data":"a628d9d2d6b65f8f498a95ae5080665bb99fb541936d40c8878f869d1016b525"} Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.826458 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6694597cff-rq925" event={"ID":"f2b055dd-f8d4-4881-9183-f69dac57cef3","Type":"ContainerStarted","Data":"f93c172fc3e2610f60a75ec2464c7cbb496068e5905b3302e0fdbd7ef397be76"} Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.828187 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" event={"ID":"b0f4e90e-919f-4354-b584-a1516961888c","Type":"ContainerDied","Data":"db276088ff7a99d7275f946f870f334029878766562a783d968682273f395889"} Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.828224 4865 scope.go:117] "RemoveContainer" containerID="20ae29404706f7a2bb46907205e967857b09335143038b6c05b3a6561337384a" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.828252 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.846648 4865 scope.go:117] "RemoveContainer" containerID="fdb6733c6cd5d6cf86c30e84748f0b395a0251ac5e3f58cc75142286fd5b5222" Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.916004 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbff7fb87-g98dn"] Jan 23 12:17:48 crc kubenswrapper[4865]: I0123 12:17:48.926841 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fbff7fb87-g98dn"] Jan 23 12:17:49 crc kubenswrapper[4865]: I0123 12:17:49.845170 4865 generic.go:334] "Generic (PLEG): container finished" podID="f2b055dd-f8d4-4881-9183-f69dac57cef3" containerID="a628d9d2d6b65f8f498a95ae5080665bb99fb541936d40c8878f869d1016b525" exitCode=0 Jan 23 12:17:49 crc kubenswrapper[4865]: I0123 12:17:49.845275 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6694597cff-rq925" event={"ID":"f2b055dd-f8d4-4881-9183-f69dac57cef3","Type":"ContainerDied","Data":"a628d9d2d6b65f8f498a95ae5080665bb99fb541936d40c8878f869d1016b525"} Jan 23 12:17:50 crc kubenswrapper[4865]: I0123 12:17:50.131337 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0f4e90e-919f-4354-b584-a1516961888c" path="/var/lib/kubelet/pods/b0f4e90e-919f-4354-b584-a1516961888c/volumes" Jan 23 12:17:50 crc kubenswrapper[4865]: I0123 12:17:50.857624 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6694597cff-rq925" event={"ID":"f2b055dd-f8d4-4881-9183-f69dac57cef3","Type":"ContainerStarted","Data":"13a71a6adf2614137ce075eca182886d646a6466ccf8a42d9c172eea1763d288"} Jan 23 12:17:50 crc kubenswrapper[4865]: I0123 12:17:50.858022 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:51 crc kubenswrapper[4865]: I0123 12:17:51.049355 4865 scope.go:117] "RemoveContainer" containerID="96b2d55ead926d291c13e7e950bbbbb89fd4d945bceaa2ae8877643952a6aa29" Jan 23 12:17:52 crc kubenswrapper[4865]: I0123 12:17:52.903828 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5fbff7fb87-g98dn" podUID="b0f4e90e-919f-4354-b584-a1516961888c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.215:5353: i/o timeout" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.190145 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6694597cff-rq925" podStartSLOduration=7.190123349 podStartE2EDuration="7.190123349s" podCreationTimestamp="2026-01-23 12:17:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:17:51.069627121 +0000 UTC m=+1515.238699347" watchObservedRunningTime="2026-01-23 12:17:54.190123349 +0000 UTC m=+1518.359195575" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.192964 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gdxtr"] Jan 23 12:17:54 crc kubenswrapper[4865]: E0123 12:17:54.193443 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f4e90e-919f-4354-b584-a1516961888c" containerName="init" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.193467 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f4e90e-919f-4354-b584-a1516961888c" containerName="init" Jan 23 12:17:54 crc kubenswrapper[4865]: E0123 12:17:54.193489 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f4e90e-919f-4354-b584-a1516961888c" containerName="dnsmasq-dns" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.193500 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f4e90e-919f-4354-b584-a1516961888c" containerName="dnsmasq-dns" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.193754 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0f4e90e-919f-4354-b584-a1516961888c" containerName="dnsmasq-dns" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.195523 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.206132 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdxtr"] Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.243795 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzhg5\" (UniqueName: \"kubernetes.io/projected/bab072a2-72db-468a-ba30-4270b5ea988c-kube-api-access-jzhg5\") pod \"community-operators-gdxtr\" (UID: \"bab072a2-72db-468a-ba30-4270b5ea988c\") " pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.243888 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab072a2-72db-468a-ba30-4270b5ea988c-catalog-content\") pod \"community-operators-gdxtr\" (UID: \"bab072a2-72db-468a-ba30-4270b5ea988c\") " pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.243955 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab072a2-72db-468a-ba30-4270b5ea988c-utilities\") pod \"community-operators-gdxtr\" (UID: \"bab072a2-72db-468a-ba30-4270b5ea988c\") " pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.345894 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzhg5\" (UniqueName: \"kubernetes.io/projected/bab072a2-72db-468a-ba30-4270b5ea988c-kube-api-access-jzhg5\") pod \"community-operators-gdxtr\" 
(UID: \"bab072a2-72db-468a-ba30-4270b5ea988c\") " pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.345976 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab072a2-72db-468a-ba30-4270b5ea988c-catalog-content\") pod \"community-operators-gdxtr\" (UID: \"bab072a2-72db-468a-ba30-4270b5ea988c\") " pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.346014 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab072a2-72db-468a-ba30-4270b5ea988c-utilities\") pod \"community-operators-gdxtr\" (UID: \"bab072a2-72db-468a-ba30-4270b5ea988c\") " pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.346590 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab072a2-72db-468a-ba30-4270b5ea988c-utilities\") pod \"community-operators-gdxtr\" (UID: \"bab072a2-72db-468a-ba30-4270b5ea988c\") " pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.346855 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab072a2-72db-468a-ba30-4270b5ea988c-catalog-content\") pod \"community-operators-gdxtr\" (UID: \"bab072a2-72db-468a-ba30-4270b5ea988c\") " pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.375788 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzhg5\" (UniqueName: \"kubernetes.io/projected/bab072a2-72db-468a-ba30-4270b5ea988c-kube-api-access-jzhg5\") pod \"community-operators-gdxtr\" (UID: \"bab072a2-72db-468a-ba30-4270b5ea988c\") " pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:17:54 crc kubenswrapper[4865]: I0123 12:17:54.527344 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:17:55 crc kubenswrapper[4865]: I0123 12:17:55.126719 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gdxtr"] Jan 23 12:17:55 crc kubenswrapper[4865]: I0123 12:17:55.927994 4865 generic.go:334] "Generic (PLEG): container finished" podID="bab072a2-72db-468a-ba30-4270b5ea988c" containerID="cba005e136be3f79a59a9bf17cd03b9f5de759aceddc9df5efc3d5630e6d3861" exitCode=0 Jan 23 12:17:55 crc kubenswrapper[4865]: I0123 12:17:55.928206 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdxtr" event={"ID":"bab072a2-72db-468a-ba30-4270b5ea988c","Type":"ContainerDied","Data":"cba005e136be3f79a59a9bf17cd03b9f5de759aceddc9df5efc3d5630e6d3861"} Jan 23 12:17:55 crc kubenswrapper[4865]: I0123 12:17:55.928300 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdxtr" event={"ID":"bab072a2-72db-468a-ba30-4270b5ea988c","Type":"ContainerStarted","Data":"7ffc20702aeba7b75acef99ec772b36d714cb7b86005476eefca961c984516de"} Jan 23 12:17:55 crc kubenswrapper[4865]: I0123 12:17:55.931628 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 12:17:57 crc kubenswrapper[4865]: I0123 12:17:57.876493 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6694597cff-rq925" Jan 23 12:17:57 crc kubenswrapper[4865]: I0123 12:17:57.955294 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdxtr" event={"ID":"bab072a2-72db-468a-ba30-4270b5ea988c","Type":"ContainerStarted","Data":"0f853837e9bf6f89de727548fadf7703732618034a391a222c9f13f9d8776f8e"} Jan 23 12:17:57 crc kubenswrapper[4865]: I0123 12:17:57.963640 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-796fd86499-rh4wg"] Jan 23 12:17:57 crc kubenswrapper[4865]: I0123 12:17:57.963995 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" podUID="48b35dfa-4f99-4293-9b99-1b0e1f842af6" containerName="dnsmasq-dns" containerID="cri-o://c0b01e2518ffed6416b3cbfa767606438d08ce958b5b51a9239a6b905a257dc5" gracePeriod=10 Jan 23 12:17:58 crc kubenswrapper[4865]: I0123 12:17:58.967461 4865 generic.go:334] "Generic (PLEG): container finished" podID="48b35dfa-4f99-4293-9b99-1b0e1f842af6" containerID="c0b01e2518ffed6416b3cbfa767606438d08ce958b5b51a9239a6b905a257dc5" exitCode=0 Jan 23 12:17:58 crc kubenswrapper[4865]: I0123 12:17:58.967514 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" event={"ID":"48b35dfa-4f99-4293-9b99-1b0e1f842af6","Type":"ContainerDied","Data":"c0b01e2518ffed6416b3cbfa767606438d08ce958b5b51a9239a6b905a257dc5"} Jan 23 12:18:00 crc kubenswrapper[4865]: I0123 12:18:00.973174 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:18:00 crc kubenswrapper[4865]: I0123 12:18:00.973255 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:18:02 crc kubenswrapper[4865]: I0123 12:18:02.222207 4865 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" podUID="48b35dfa-4f99-4293-9b99-1b0e1f842af6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.226:5353: connect: connection refused" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.006993 4865 generic.go:334] "Generic (PLEG): container finished" podID="bab072a2-72db-468a-ba30-4270b5ea988c" containerID="0f853837e9bf6f89de727548fadf7703732618034a391a222c9f13f9d8776f8e" exitCode=0 Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.007191 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdxtr" event={"ID":"bab072a2-72db-468a-ba30-4270b5ea988c","Type":"ContainerDied","Data":"0f853837e9bf6f89de727548fadf7703732618034a391a222c9f13f9d8776f8e"} Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.645951 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.767195 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-dns-svc\") pod \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.767285 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2t72\" (UniqueName: \"kubernetes.io/projected/48b35dfa-4f99-4293-9b99-1b0e1f842af6-kube-api-access-s2t72\") pod \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.767447 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-openstack-edpm-ipam\") pod \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.767473 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-ovsdbserver-nb\") pod \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.767518 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-dns-swift-storage-0\") pod \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.767544 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-ovsdbserver-sb\") pod \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.767593 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-config\") pod \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\" (UID: \"48b35dfa-4f99-4293-9b99-1b0e1f842af6\") " Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.772673 4865 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b35dfa-4f99-4293-9b99-1b0e1f842af6-kube-api-access-s2t72" (OuterVolumeSpecName: "kube-api-access-s2t72") pod "48b35dfa-4f99-4293-9b99-1b0e1f842af6" (UID: "48b35dfa-4f99-4293-9b99-1b0e1f842af6"). InnerVolumeSpecName "kube-api-access-s2t72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.843064 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48b35dfa-4f99-4293-9b99-1b0e1f842af6" (UID: "48b35dfa-4f99-4293-9b99-1b0e1f842af6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.845378 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48b35dfa-4f99-4293-9b99-1b0e1f842af6" (UID: "48b35dfa-4f99-4293-9b99-1b0e1f842af6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.847771 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48b35dfa-4f99-4293-9b99-1b0e1f842af6" (UID: "48b35dfa-4f99-4293-9b99-1b0e1f842af6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.853934 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-config" (OuterVolumeSpecName: "config") pod "48b35dfa-4f99-4293-9b99-1b0e1f842af6" (UID: "48b35dfa-4f99-4293-9b99-1b0e1f842af6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.854281 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "48b35dfa-4f99-4293-9b99-1b0e1f842af6" (UID: "48b35dfa-4f99-4293-9b99-1b0e1f842af6"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.854359 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "48b35dfa-4f99-4293-9b99-1b0e1f842af6" (UID: "48b35dfa-4f99-4293-9b99-1b0e1f842af6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.870245 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.870295 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.870306 4865 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.870317 4865 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.870327 4865 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.870337 4865 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48b35dfa-4f99-4293-9b99-1b0e1f842af6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 23 12:18:03 crc kubenswrapper[4865]: I0123 12:18:03.870345 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2t72\" (UniqueName: \"kubernetes.io/projected/48b35dfa-4f99-4293-9b99-1b0e1f842af6-kube-api-access-s2t72\") on node \"crc\" DevicePath \"\"" Jan 23 12:18:04 crc kubenswrapper[4865]: I0123 12:18:04.022582 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" event={"ID":"48b35dfa-4f99-4293-9b99-1b0e1f842af6","Type":"ContainerDied","Data":"73c240812b3e7adf9b722b40b5fd4226081186e40466280920af53d11eb5e2f1"} Jan 23 12:18:04 crc kubenswrapper[4865]: I0123 12:18:04.022820 4865 scope.go:117] "RemoveContainer" containerID="c0b01e2518ffed6416b3cbfa767606438d08ce958b5b51a9239a6b905a257dc5" Jan 23 12:18:04 crc kubenswrapper[4865]: I0123 12:18:04.022700 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-796fd86499-rh4wg" Jan 23 12:18:04 crc kubenswrapper[4865]: I0123 12:18:04.067115 4865 scope.go:117] "RemoveContainer" containerID="c526b453f145330a83879041b22fc7c67eed138267fa98aaf3f8f99c4ebab451" Jan 23 12:18:04 crc kubenswrapper[4865]: I0123 12:18:04.074232 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-796fd86499-rh4wg"] Jan 23 12:18:04 crc kubenswrapper[4865]: I0123 12:18:04.084022 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-796fd86499-rh4wg"] Jan 23 12:18:04 crc kubenswrapper[4865]: I0123 12:18:04.128358 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b35dfa-4f99-4293-9b99-1b0e1f842af6" path="/var/lib/kubelet/pods/48b35dfa-4f99-4293-9b99-1b0e1f842af6/volumes" Jan 23 12:18:06 crc kubenswrapper[4865]: I0123 12:18:06.062081 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdxtr" event={"ID":"bab072a2-72db-468a-ba30-4270b5ea988c","Type":"ContainerStarted","Data":"c145f51b5520c02c0e9bfd33c525b2013a9bd1f17be80b95cebd8e98fc4d5dd8"} Jan 23 12:18:06 crc kubenswrapper[4865]: I0123 12:18:06.087492 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gdxtr" podStartSLOduration=2.834893314 podStartE2EDuration="12.087468658s" podCreationTimestamp="2026-01-23 12:17:54 +0000 UTC" firstStartedPulling="2026-01-23 12:17:55.931239753 +0000 UTC m=+1520.100311979" lastFinishedPulling="2026-01-23 12:18:05.183815097 +0000 UTC m=+1529.352887323" observedRunningTime="2026-01-23 12:18:06.080225561 +0000 UTC m=+1530.249297787" watchObservedRunningTime="2026-01-23 12:18:06.087468658 +0000 UTC m=+1530.256540884" Jan 23 12:18:11 crc kubenswrapper[4865]: I0123 12:18:11.105638 4865 generic.go:334] "Generic (PLEG): container finished" podID="bfb2126d-4cee-451f-8867-6f098453ef37" containerID="f3be8e2bf6e1814acaf0859b5fcd509cb46d2955bb9fa52bbea43917acebcc8a" exitCode=0 Jan 23 12:18:11 crc kubenswrapper[4865]: I0123 12:18:11.105699 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bfb2126d-4cee-451f-8867-6f098453ef37","Type":"ContainerDied","Data":"f3be8e2bf6e1814acaf0859b5fcd509cb46d2955bb9fa52bbea43917acebcc8a"} Jan 23 12:18:11 crc kubenswrapper[4865]: I0123 12:18:11.110002 4865 generic.go:334] "Generic (PLEG): container finished" podID="b2b3256c-585c-4ce5-9f99-400086a0117e" containerID="a80232b0d8bf5cda6841d8ea95111eb710021eea608fb19ed140989e4ada775f" exitCode=0 Jan 23 12:18:11 crc kubenswrapper[4865]: I0123 12:18:11.110165 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b2b3256c-585c-4ce5-9f99-400086a0117e","Type":"ContainerDied","Data":"a80232b0d8bf5cda6841d8ea95111eb710021eea608fb19ed140989e4ada775f"} Jan 23 12:18:12 crc kubenswrapper[4865]: I0123 12:18:12.159982 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"bfb2126d-4cee-451f-8867-6f098453ef37","Type":"ContainerStarted","Data":"56734979269c8358bc897eeb403a3c9375bbfc7d670274f3c7f55e9f59d95bcf"} Jan 23 12:18:12 crc kubenswrapper[4865]: I0123 12:18:12.160315 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b2b3256c-585c-4ce5-9f99-400086a0117e","Type":"ContainerStarted","Data":"bc02766d3dcfaf0e1d1a2c3d99a7b9a3fc788974c146e1e2f721e18c7eb343e3"} Jan 23 12:18:13 crc 
kubenswrapper[4865]: I0123 12:18:13.162221 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:18:13 crc kubenswrapper[4865]: I0123 12:18:13.162969 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 23 12:18:13 crc kubenswrapper[4865]: I0123 12:18:13.191968 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.191946734 podStartE2EDuration="38.191946734s" podCreationTimestamp="2026-01-23 12:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:18:13.186808858 +0000 UTC m=+1537.355881084" watchObservedRunningTime="2026-01-23 12:18:13.191946734 +0000 UTC m=+1537.361018960" Jan 23 12:18:13 crc kubenswrapper[4865]: I0123 12:18:13.222251 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.222231954 podStartE2EDuration="38.222231954s" podCreationTimestamp="2026-01-23 12:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:18:13.212755982 +0000 UTC m=+1537.381828238" watchObservedRunningTime="2026-01-23 12:18:13.222231954 +0000 UTC m=+1537.391304180" Jan 23 12:18:14 crc kubenswrapper[4865]: I0123 12:18:14.528349 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:18:14 crc kubenswrapper[4865]: I0123 12:18:14.528399 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:18:14 crc kubenswrapper[4865]: I0123 12:18:14.584930 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:18:15 crc kubenswrapper[4865]: I0123 12:18:15.232809 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:18:15 crc kubenswrapper[4865]: I0123 12:18:15.303805 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gdxtr"] Jan 23 12:18:17 crc kubenswrapper[4865]: I0123 12:18:17.197058 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gdxtr" podUID="bab072a2-72db-468a-ba30-4270b5ea988c" containerName="registry-server" containerID="cri-o://c145f51b5520c02c0e9bfd33c525b2013a9bd1f17be80b95cebd8e98fc4d5dd8" gracePeriod=2 Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.214810 4865 generic.go:334] "Generic (PLEG): container finished" podID="bab072a2-72db-468a-ba30-4270b5ea988c" containerID="c145f51b5520c02c0e9bfd33c525b2013a9bd1f17be80b95cebd8e98fc4d5dd8" exitCode=0 Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.215017 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdxtr" event={"ID":"bab072a2-72db-468a-ba30-4270b5ea988c","Type":"ContainerDied","Data":"c145f51b5520c02c0e9bfd33c525b2013a9bd1f17be80b95cebd8e98fc4d5dd8"} Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.648349 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.704402 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab072a2-72db-468a-ba30-4270b5ea988c-catalog-content\") pod \"bab072a2-72db-468a-ba30-4270b5ea988c\" (UID: \"bab072a2-72db-468a-ba30-4270b5ea988c\") " Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.704822 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab072a2-72db-468a-ba30-4270b5ea988c-utilities\") pod \"bab072a2-72db-468a-ba30-4270b5ea988c\" (UID: \"bab072a2-72db-468a-ba30-4270b5ea988c\") " Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.704989 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzhg5\" (UniqueName: \"kubernetes.io/projected/bab072a2-72db-468a-ba30-4270b5ea988c-kube-api-access-jzhg5\") pod \"bab072a2-72db-468a-ba30-4270b5ea988c\" (UID: \"bab072a2-72db-468a-ba30-4270b5ea988c\") " Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.705496 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab072a2-72db-468a-ba30-4270b5ea988c-utilities" (OuterVolumeSpecName: "utilities") pod "bab072a2-72db-468a-ba30-4270b5ea988c" (UID: "bab072a2-72db-468a-ba30-4270b5ea988c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.705925 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab072a2-72db-468a-ba30-4270b5ea988c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.730413 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab072a2-72db-468a-ba30-4270b5ea988c-kube-api-access-jzhg5" (OuterVolumeSpecName: "kube-api-access-jzhg5") pod "bab072a2-72db-468a-ba30-4270b5ea988c" (UID: "bab072a2-72db-468a-ba30-4270b5ea988c"). InnerVolumeSpecName "kube-api-access-jzhg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.768326 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab072a2-72db-468a-ba30-4270b5ea988c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bab072a2-72db-468a-ba30-4270b5ea988c" (UID: "bab072a2-72db-468a-ba30-4270b5ea988c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.808271 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzhg5\" (UniqueName: \"kubernetes.io/projected/bab072a2-72db-468a-ba30-4270b5ea988c-kube-api-access-jzhg5\") on node \"crc\" DevicePath \"\"" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.808315 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab072a2-72db-468a-ba30-4270b5ea988c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.924341 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp"] Jan 23 12:18:18 crc kubenswrapper[4865]: E0123 12:18:18.924782 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab072a2-72db-468a-ba30-4270b5ea988c" containerName="extract-content" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.924801 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab072a2-72db-468a-ba30-4270b5ea988c" containerName="extract-content" Jan 23 12:18:18 crc kubenswrapper[4865]: E0123 12:18:18.924818 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b35dfa-4f99-4293-9b99-1b0e1f842af6" containerName="dnsmasq-dns" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.924824 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b35dfa-4f99-4293-9b99-1b0e1f842af6" containerName="dnsmasq-dns" Jan 23 12:18:18 crc kubenswrapper[4865]: E0123 12:18:18.924838 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab072a2-72db-468a-ba30-4270b5ea988c" containerName="extract-utilities" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.924847 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab072a2-72db-468a-ba30-4270b5ea988c" containerName="extract-utilities" Jan 23 12:18:18 crc kubenswrapper[4865]: E0123 12:18:18.924874 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48b35dfa-4f99-4293-9b99-1b0e1f842af6" containerName="init" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.924882 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="48b35dfa-4f99-4293-9b99-1b0e1f842af6" containerName="init" Jan 23 12:18:18 crc kubenswrapper[4865]: E0123 12:18:18.924894 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab072a2-72db-468a-ba30-4270b5ea988c" containerName="registry-server" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.924900 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab072a2-72db-468a-ba30-4270b5ea988c" containerName="registry-server" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.925084 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab072a2-72db-468a-ba30-4270b5ea988c" containerName="registry-server" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.925114 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b35dfa-4f99-4293-9b99-1b0e1f842af6" containerName="dnsmasq-dns" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.925871 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.932199 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.932374 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.932417 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.932714 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:18:18 crc kubenswrapper[4865]: I0123 12:18:18.950948 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp"] Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.011538 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.011609 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.011637 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.011688 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hhm\" (UniqueName: \"kubernetes.io/projected/078d8d59-03bd-49c7-a460-c38313812bac-kube-api-access-v5hhm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.114094 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.114146 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.114170 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.114205 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hhm\" (UniqueName: \"kubernetes.io/projected/078d8d59-03bd-49c7-a460-c38313812bac-kube-api-access-v5hhm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.117387 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.118487 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.121172 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.130692 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hhm\" (UniqueName: \"kubernetes.io/projected/078d8d59-03bd-49c7-a460-c38313812bac-kube-api-access-v5hhm\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.226059 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gdxtr" event={"ID":"bab072a2-72db-468a-ba30-4270b5ea988c","Type":"ContainerDied","Data":"7ffc20702aeba7b75acef99ec772b36d714cb7b86005476eefca961c984516de"} Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.226116 4865 scope.go:117] "RemoveContainer" containerID="c145f51b5520c02c0e9bfd33c525b2013a9bd1f17be80b95cebd8e98fc4d5dd8" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.226252 4865 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gdxtr" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.243329 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.263724 4865 scope.go:117] "RemoveContainer" containerID="0f853837e9bf6f89de727548fadf7703732618034a391a222c9f13f9d8776f8e" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.317212 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gdxtr"] Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.327139 4865 scope.go:117] "RemoveContainer" containerID="cba005e136be3f79a59a9bf17cd03b9f5de759aceddc9df5efc3d5630e6d3861" Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.341160 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gdxtr"] Jan 23 12:18:19 crc kubenswrapper[4865]: I0123 12:18:19.965001 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp"] Jan 23 12:18:19 crc kubenswrapper[4865]: W0123 12:18:19.983100 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod078d8d59_03bd_49c7_a460_c38313812bac.slice/crio-4613467bb468b30581ffcded1db8785e86f4914c1253326eb2198572fb646482 WatchSource:0}: Error finding container 4613467bb468b30581ffcded1db8785e86f4914c1253326eb2198572fb646482: Status 404 returned error can't find the container with id 4613467bb468b30581ffcded1db8785e86f4914c1253326eb2198572fb646482 Jan 23 12:18:20 crc kubenswrapper[4865]: I0123 12:18:20.128303 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab072a2-72db-468a-ba30-4270b5ea988c" path="/var/lib/kubelet/pods/bab072a2-72db-468a-ba30-4270b5ea988c/volumes" Jan 23 12:18:20 crc kubenswrapper[4865]: I0123 12:18:20.234348 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" event={"ID":"078d8d59-03bd-49c7-a460-c38313812bac","Type":"ContainerStarted","Data":"4613467bb468b30581ffcded1db8785e86f4914c1253326eb2198572fb646482"} Jan 23 12:18:25 crc kubenswrapper[4865]: I0123 12:18:25.559801 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 23 12:18:26 crc kubenswrapper[4865]: I0123 12:18:26.024683 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 23 12:18:36 crc kubenswrapper[4865]: E0123 12:18:36.213910 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 23 12:18:36 crc kubenswrapper[4865]: E0123 12:18:36.214528 4865 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 23 12:18:36 crc kubenswrapper[4865]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 23 12:18:36 crc kubenswrapper[4865]: - hosts: all Jan 
23 12:18:36 crc kubenswrapper[4865]: strategy: linear Jan 23 12:18:36 crc kubenswrapper[4865]: tasks: Jan 23 12:18:36 crc kubenswrapper[4865]: - name: Enable podified-repos Jan 23 12:18:36 crc kubenswrapper[4865]: become: true Jan 23 12:18:36 crc kubenswrapper[4865]: ansible.builtin.shell: | Jan 23 12:18:36 crc kubenswrapper[4865]: set -euxo pipefail Jan 23 12:18:36 crc kubenswrapper[4865]: pushd /var/tmp Jan 23 12:18:36 crc kubenswrapper[4865]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Jan 23 12:18:36 crc kubenswrapper[4865]: pushd repo-setup-main Jan 23 12:18:36 crc kubenswrapper[4865]: python3 -m venv ./venv Jan 23 12:18:36 crc kubenswrapper[4865]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Jan 23 12:18:36 crc kubenswrapper[4865]: ./venv/bin/repo-setup current-podified -b antelope Jan 23 12:18:36 crc kubenswrapper[4865]: popd Jan 23 12:18:36 crc kubenswrapper[4865]: rm -rf repo-setup-main Jan 23 12:18:36 crc kubenswrapper[4865]: Jan 23 12:18:36 crc kubenswrapper[4865]: Jan 23 12:18:36 crc kubenswrapper[4865]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Jan 23 12:18:36 crc kubenswrapper[4865]: edpm_override_hosts: openstack-edpm-ipam Jan 23 12:18:36 crc kubenswrapper[4865]: edpm_service_type: repo-setup Jan 23 12:18:36 crc kubenswrapper[4865]: Jan 23 12:18:36 crc kubenswrapper[4865]: Jan 23 12:18:36 crc kubenswrapper[4865]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v5hhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp_openstack(078d8d59-03bd-49c7-a460-c38313812bac): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 23 12:18:36 crc kubenswrapper[4865]: > logger="UnhandledError" Jan 23 12:18:36 crc kubenswrapper[4865]: E0123 12:18:36.215703 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" 
with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" podUID="078d8d59-03bd-49c7-a460-c38313812bac" Jan 23 12:18:36 crc kubenswrapper[4865]: E0123 12:18:36.400980 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" podUID="078d8d59-03bd-49c7-a460-c38313812bac" Jan 23 12:18:48 crc kubenswrapper[4865]: I0123 12:18:48.564143 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:18:49 crc kubenswrapper[4865]: I0123 12:18:49.535044 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" event={"ID":"078d8d59-03bd-49c7-a460-c38313812bac","Type":"ContainerStarted","Data":"10310ec6a83b9a79f99d2fa0eecfae0c1493d431a7fa6c41e576d615b4b3a632"} Jan 23 12:18:49 crc kubenswrapper[4865]: I0123 12:18:49.570435 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" podStartSLOduration=2.995574801 podStartE2EDuration="31.570415978s" podCreationTimestamp="2026-01-23 12:18:18 +0000 UTC" firstStartedPulling="2026-01-23 12:18:19.986832102 +0000 UTC m=+1544.155904328" lastFinishedPulling="2026-01-23 12:18:48.561673279 +0000 UTC m=+1572.730745505" observedRunningTime="2026-01-23 12:18:49.557152955 +0000 UTC m=+1573.726225191" watchObservedRunningTime="2026-01-23 12:18:49.570415978 +0000 UTC m=+1573.739488204" Jan 23 12:18:51 crc kubenswrapper[4865]: I0123 12:18:51.163011 4865 scope.go:117] "RemoveContainer" containerID="8ca271485ab203ab4686b756c0caf3b9ab346a773344a13b2f2784b3e6f037f8" Jan 23 12:18:51 crc kubenswrapper[4865]: I0123 12:18:51.187320 4865 scope.go:117] "RemoveContainer" containerID="a869947817e7940600226a3079e8946e920766069a180c4538aec29c05190d1c" Jan 23 12:18:51 crc kubenswrapper[4865]: I0123 12:18:51.207633 4865 scope.go:117] "RemoveContainer" containerID="57c67591266883497ef0bdbc1e43546d32e015ace7ba8c07961655549d534fd1" Jan 23 12:18:51 crc kubenswrapper[4865]: I0123 12:18:51.229319 4865 scope.go:117] "RemoveContainer" containerID="3ef2c875e2abe1a8810c86e36653524ea0fae9936696e5c4e3f0ea65fa7bbafc" Jan 23 12:18:51 crc kubenswrapper[4865]: I0123 12:18:51.254908 4865 scope.go:117] "RemoveContainer" containerID="7ed173f4be5ac5a3351c319876ccac9b2a22b1038a98c603ffb24312ae6d635d" Jan 23 12:18:51 crc kubenswrapper[4865]: I0123 12:18:51.274257 4865 scope.go:117] "RemoveContainer" containerID="d2830aa1a5fae675c81384ff4ceb36e7d7766546609eeb6684c1b1137fc42610" Jan 23 12:18:51 crc kubenswrapper[4865]: I0123 12:18:51.299114 4865 scope.go:117] "RemoveContainer" containerID="e8c560f6a96e10829538cb3ad68eaf34a62dfc2c6e388bbba51ded3ea1a7fbb3" Jan 23 12:18:51 crc kubenswrapper[4865]: I0123 12:18:51.320560 4865 scope.go:117] "RemoveContainer" containerID="e5d727a5155cb3a4ec53a69a21563e71943e7d30470e871f3736d0b6f9da3d45" Jan 23 12:19:01 crc kubenswrapper[4865]: I0123 12:19:01.641009 4865 generic.go:334] "Generic (PLEG): container finished" podID="078d8d59-03bd-49c7-a460-c38313812bac" containerID="10310ec6a83b9a79f99d2fa0eecfae0c1493d431a7fa6c41e576d615b4b3a632" exitCode=0 Jan 23 12:19:01 crc kubenswrapper[4865]: I0123 
12:19:01.641092 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" event={"ID":"078d8d59-03bd-49c7-a460-c38313812bac","Type":"ContainerDied","Data":"10310ec6a83b9a79f99d2fa0eecfae0c1493d431a7fa6c41e576d615b4b3a632"} Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.087057 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.245026 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5hhm\" (UniqueName: \"kubernetes.io/projected/078d8d59-03bd-49c7-a460-c38313812bac-kube-api-access-v5hhm\") pod \"078d8d59-03bd-49c7-a460-c38313812bac\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.245097 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-inventory\") pod \"078d8d59-03bd-49c7-a460-c38313812bac\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.245119 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-repo-setup-combined-ca-bundle\") pod \"078d8d59-03bd-49c7-a460-c38313812bac\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.245216 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-ssh-key-openstack-edpm-ipam\") pod \"078d8d59-03bd-49c7-a460-c38313812bac\" (UID: \"078d8d59-03bd-49c7-a460-c38313812bac\") " Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.252016 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "078d8d59-03bd-49c7-a460-c38313812bac" (UID: "078d8d59-03bd-49c7-a460-c38313812bac"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.256447 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078d8d59-03bd-49c7-a460-c38313812bac-kube-api-access-v5hhm" (OuterVolumeSpecName: "kube-api-access-v5hhm") pod "078d8d59-03bd-49c7-a460-c38313812bac" (UID: "078d8d59-03bd-49c7-a460-c38313812bac"). InnerVolumeSpecName "kube-api-access-v5hhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.271687 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "078d8d59-03bd-49c7-a460-c38313812bac" (UID: "078d8d59-03bd-49c7-a460-c38313812bac"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.276311 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-inventory" (OuterVolumeSpecName: "inventory") pod "078d8d59-03bd-49c7-a460-c38313812bac" (UID: "078d8d59-03bd-49c7-a460-c38313812bac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.348096 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.348131 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5hhm\" (UniqueName: \"kubernetes.io/projected/078d8d59-03bd-49c7-a460-c38313812bac-kube-api-access-v5hhm\") on node \"crc\" DevicePath \"\"" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.348145 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.348157 4865 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078d8d59-03bd-49c7-a460-c38313812bac-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.672262 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" event={"ID":"078d8d59-03bd-49c7-a460-c38313812bac","Type":"ContainerDied","Data":"4613467bb468b30581ffcded1db8785e86f4914c1253326eb2198572fb646482"} Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.672324 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4613467bb468b30581ffcded1db8785e86f4914c1253326eb2198572fb646482" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.672414 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mfghp" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.767721 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc"] Jan 23 12:19:03 crc kubenswrapper[4865]: E0123 12:19:03.768386 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078d8d59-03bd-49c7-a460-c38313812bac" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.768402 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="078d8d59-03bd-49c7-a460-c38313812bac" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.768582 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="078d8d59-03bd-49c7-a460-c38313812bac" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.769201 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.772872 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.773233 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.773280 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.778515 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.792119 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc"] Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.958873 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gf8v\" (UniqueName: \"kubernetes.io/projected/9d51b94c-155e-44d7-9ae8-b03424eed4ca-kube-api-access-4gf8v\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qbpbc\" (UID: \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.959623 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d51b94c-155e-44d7-9ae8-b03424eed4ca-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qbpbc\" (UID: \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:03 crc kubenswrapper[4865]: I0123 12:19:03.959812 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d51b94c-155e-44d7-9ae8-b03424eed4ca-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qbpbc\" (UID: \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:04 crc kubenswrapper[4865]: I0123 12:19:04.061832 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d51b94c-155e-44d7-9ae8-b03424eed4ca-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qbpbc\" (UID: \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:04 crc kubenswrapper[4865]: I0123 12:19:04.061896 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d51b94c-155e-44d7-9ae8-b03424eed4ca-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qbpbc\" (UID: \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:04 crc kubenswrapper[4865]: I0123 12:19:04.061974 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gf8v\" (UniqueName: \"kubernetes.io/projected/9d51b94c-155e-44d7-9ae8-b03424eed4ca-kube-api-access-4gf8v\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-qbpbc\" (UID: \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:04 crc kubenswrapper[4865]: I0123 12:19:04.066865 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d51b94c-155e-44d7-9ae8-b03424eed4ca-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qbpbc\" (UID: \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:04 crc kubenswrapper[4865]: I0123 12:19:04.067150 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d51b94c-155e-44d7-9ae8-b03424eed4ca-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qbpbc\" (UID: \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:04 crc kubenswrapper[4865]: I0123 12:19:04.081803 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gf8v\" (UniqueName: \"kubernetes.io/projected/9d51b94c-155e-44d7-9ae8-b03424eed4ca-kube-api-access-4gf8v\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-qbpbc\" (UID: \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:04 crc kubenswrapper[4865]: I0123 12:19:04.096483 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:04 crc kubenswrapper[4865]: I0123 12:19:04.455055 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc"] Jan 23 12:19:04 crc kubenswrapper[4865]: I0123 12:19:04.681145 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" event={"ID":"9d51b94c-155e-44d7-9ae8-b03424eed4ca","Type":"ContainerStarted","Data":"736ada9fd2302e367e5a2ac0b598023bbadb4df0e4e770d9df239ca7ebed2dd3"} Jan 23 12:19:05 crc kubenswrapper[4865]: I0123 12:19:05.691255 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" event={"ID":"9d51b94c-155e-44d7-9ae8-b03424eed4ca","Type":"ContainerStarted","Data":"d6e0f90c730500dc20e31aebebec05a8f48e60b0e782f07223704742b6b50e32"} Jan 23 12:19:05 crc kubenswrapper[4865]: I0123 12:19:05.715042 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" podStartSLOduration=2.280968988 podStartE2EDuration="2.715015541s" podCreationTimestamp="2026-01-23 12:19:03 +0000 UTC" firstStartedPulling="2026-01-23 12:19:04.464314519 +0000 UTC m=+1588.633386745" lastFinishedPulling="2026-01-23 12:19:04.898361072 +0000 UTC m=+1589.067433298" observedRunningTime="2026-01-23 12:19:05.707960659 +0000 UTC m=+1589.877032895" watchObservedRunningTime="2026-01-23 12:19:05.715015541 +0000 UTC m=+1589.884087777" Jan 23 12:19:07 crc kubenswrapper[4865]: I0123 12:19:07.712211 4865 generic.go:334] "Generic (PLEG): container finished" podID="9d51b94c-155e-44d7-9ae8-b03424eed4ca" containerID="d6e0f90c730500dc20e31aebebec05a8f48e60b0e782f07223704742b6b50e32" exitCode=0 Jan 23 12:19:07 crc kubenswrapper[4865]: I0123 12:19:07.712276 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" event={"ID":"9d51b94c-155e-44d7-9ae8-b03424eed4ca","Type":"ContainerDied","Data":"d6e0f90c730500dc20e31aebebec05a8f48e60b0e782f07223704742b6b50e32"} Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.095240 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.171430 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gf8v\" (UniqueName: \"kubernetes.io/projected/9d51b94c-155e-44d7-9ae8-b03424eed4ca-kube-api-access-4gf8v\") pod \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\" (UID: \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\") " Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.171638 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d51b94c-155e-44d7-9ae8-b03424eed4ca-ssh-key-openstack-edpm-ipam\") pod \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\" (UID: \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\") " Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.171788 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d51b94c-155e-44d7-9ae8-b03424eed4ca-inventory\") pod \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\" (UID: \"9d51b94c-155e-44d7-9ae8-b03424eed4ca\") " Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.188533 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d51b94c-155e-44d7-9ae8-b03424eed4ca-kube-api-access-4gf8v" (OuterVolumeSpecName: "kube-api-access-4gf8v") pod "9d51b94c-155e-44d7-9ae8-b03424eed4ca" (UID: "9d51b94c-155e-44d7-9ae8-b03424eed4ca"). InnerVolumeSpecName "kube-api-access-4gf8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.196726 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d51b94c-155e-44d7-9ae8-b03424eed4ca-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9d51b94c-155e-44d7-9ae8-b03424eed4ca" (UID: "9d51b94c-155e-44d7-9ae8-b03424eed4ca"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.197718 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d51b94c-155e-44d7-9ae8-b03424eed4ca-inventory" (OuterVolumeSpecName: "inventory") pod "9d51b94c-155e-44d7-9ae8-b03424eed4ca" (UID: "9d51b94c-155e-44d7-9ae8-b03424eed4ca"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.273432 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d51b94c-155e-44d7-9ae8-b03424eed4ca-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.273495 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gf8v\" (UniqueName: \"kubernetes.io/projected/9d51b94c-155e-44d7-9ae8-b03424eed4ca-kube-api-access-4gf8v\") on node \"crc\" DevicePath \"\"" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.274004 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d51b94c-155e-44d7-9ae8-b03424eed4ca-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.736958 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" event={"ID":"9d51b94c-155e-44d7-9ae8-b03424eed4ca","Type":"ContainerDied","Data":"736ada9fd2302e367e5a2ac0b598023bbadb4df0e4e770d9df239ca7ebed2dd3"} Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.737018 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-qbpbc" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.737023 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="736ada9fd2302e367e5a2ac0b598023bbadb4df0e4e770d9df239ca7ebed2dd3" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.825182 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn"] Jan 23 12:19:09 crc kubenswrapper[4865]: E0123 12:19:09.825555 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d51b94c-155e-44d7-9ae8-b03424eed4ca" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.825572 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d51b94c-155e-44d7-9ae8-b03424eed4ca" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.825998 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d51b94c-155e-44d7-9ae8-b03424eed4ca" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.826565 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.829977 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn"] Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.849504 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.849709 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.849732 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.849876 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.887617 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.887688 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.887751 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.887796 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhrls\" (UniqueName: \"kubernetes.io/projected/4d9ae425-cbae-4264-9462-76e79ab2d23e-kube-api-access-vhrls\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.989286 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.989630 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhrls\" (UniqueName: 
\"kubernetes.io/projected/4d9ae425-cbae-4264-9462-76e79ab2d23e-kube-api-access-vhrls\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.989693 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.989736 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.994512 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:09 crc kubenswrapper[4865]: I0123 12:19:09.995310 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:10 crc kubenswrapper[4865]: I0123 12:19:10.005909 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:10 crc kubenswrapper[4865]: I0123 12:19:10.033499 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhrls\" (UniqueName: \"kubernetes.io/projected/4d9ae425-cbae-4264-9462-76e79ab2d23e-kube-api-access-vhrls\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:10 crc kubenswrapper[4865]: I0123 12:19:10.166300 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:19:10 crc kubenswrapper[4865]: I0123 12:19:10.713656 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn"] Jan 23 12:19:10 crc kubenswrapper[4865]: I0123 12:19:10.746219 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" event={"ID":"4d9ae425-cbae-4264-9462-76e79ab2d23e","Type":"ContainerStarted","Data":"41e7dcac171bf5afc3ce010691dcf572d3d423311897aa1cf54979abcb59cedb"} Jan 23 12:19:11 crc kubenswrapper[4865]: I0123 12:19:11.767636 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" event={"ID":"4d9ae425-cbae-4264-9462-76e79ab2d23e","Type":"ContainerStarted","Data":"0cb4dfa3494b2a37e059770881ea8236f5b64ab6220f9ef3c3e8de4e77580c90"} Jan 23 12:19:11 crc kubenswrapper[4865]: I0123 12:19:11.796710 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" podStartSLOduration=2.322156559 podStartE2EDuration="2.796686925s" podCreationTimestamp="2026-01-23 12:19:09 +0000 UTC" firstStartedPulling="2026-01-23 12:19:10.719221998 +0000 UTC m=+1594.888294224" lastFinishedPulling="2026-01-23 12:19:11.193752364 +0000 UTC m=+1595.362824590" observedRunningTime="2026-01-23 12:19:11.783334781 +0000 UTC m=+1595.952407017" watchObservedRunningTime="2026-01-23 12:19:11.796686925 +0000 UTC m=+1595.965759151" Jan 23 12:19:48 crc kubenswrapper[4865]: I0123 12:19:48.776085 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:19:48 crc kubenswrapper[4865]: I0123 12:19:48.776741 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:19:51 crc kubenswrapper[4865]: I0123 12:19:51.424989 4865 scope.go:117] "RemoveContainer" containerID="2fb54669295550208e9eceb50639242114d444da38c9891c108deb2c3b3c6f45" Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.045700 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-lpvc6"] Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.054395 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-lpvc6"] Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.063666 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-v6mpl"] Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.074126 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bwhj2"] Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.083237 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-2d08-account-create-update-bc5fq"] Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.090450 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-fe22-account-create-update-g9dt9"] Jan 23 12:19:55 crc 
kubenswrapper[4865]: I0123 12:19:55.098027 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-v6mpl"] Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.106108 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-2d08-account-create-update-bc5fq"] Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.114750 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-fe22-account-create-update-g9dt9"] Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.121894 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bwhj2"] Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.307134 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2bxr2"] Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.309538 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.329611 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2bxr2"] Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.486840 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1594c36-67eb-4195-8e6b-f72e2ddf140e-catalog-content\") pod \"redhat-operators-2bxr2\" (UID: \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\") " pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.487135 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1594c36-67eb-4195-8e6b-f72e2ddf140e-utilities\") pod \"redhat-operators-2bxr2\" (UID: \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\") " pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.487245 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t66l8\" (UniqueName: \"kubernetes.io/projected/a1594c36-67eb-4195-8e6b-f72e2ddf140e-kube-api-access-t66l8\") pod \"redhat-operators-2bxr2\" (UID: \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\") " pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.589285 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1594c36-67eb-4195-8e6b-f72e2ddf140e-utilities\") pod \"redhat-operators-2bxr2\" (UID: \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\") " pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.589352 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t66l8\" (UniqueName: \"kubernetes.io/projected/a1594c36-67eb-4195-8e6b-f72e2ddf140e-kube-api-access-t66l8\") pod \"redhat-operators-2bxr2\" (UID: \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\") " pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.589482 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1594c36-67eb-4195-8e6b-f72e2ddf140e-catalog-content\") pod \"redhat-operators-2bxr2\" (UID: \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\") " 
pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.590199 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1594c36-67eb-4195-8e6b-f72e2ddf140e-utilities\") pod \"redhat-operators-2bxr2\" (UID: \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\") " pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.590272 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1594c36-67eb-4195-8e6b-f72e2ddf140e-catalog-content\") pod \"redhat-operators-2bxr2\" (UID: \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\") " pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.611406 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t66l8\" (UniqueName: \"kubernetes.io/projected/a1594c36-67eb-4195-8e6b-f72e2ddf140e-kube-api-access-t66l8\") pod \"redhat-operators-2bxr2\" (UID: \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\") " pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:19:55 crc kubenswrapper[4865]: I0123 12:19:55.695868 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:19:56 crc kubenswrapper[4865]: I0123 12:19:56.030468 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0d2a-account-create-update-ht6rp"] Jan 23 12:19:56 crc kubenswrapper[4865]: I0123 12:19:56.044937 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0d2a-account-create-update-ht6rp"] Jan 23 12:19:56 crc kubenswrapper[4865]: I0123 12:19:56.127691 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eec7ea5-0436-42dc-b49a-d7d9a902977b" path="/var/lib/kubelet/pods/0eec7ea5-0436-42dc-b49a-d7d9a902977b/volumes" Jan 23 12:19:56 crc kubenswrapper[4865]: I0123 12:19:56.128722 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="182030e3-73bd-492b-b070-7299395fd9e8" path="/var/lib/kubelet/pods/182030e3-73bd-492b-b070-7299395fd9e8/volumes" Jan 23 12:19:56 crc kubenswrapper[4865]: I0123 12:19:56.129432 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40b36caf-c4a8-4b90-adf0-94f77019d3aa" path="/var/lib/kubelet/pods/40b36caf-c4a8-4b90-adf0-94f77019d3aa/volumes" Jan 23 12:19:56 crc kubenswrapper[4865]: I0123 12:19:56.130340 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75a3a4e3-ba70-426a-abfc-6b8fd4c76632" path="/var/lib/kubelet/pods/75a3a4e3-ba70-426a-abfc-6b8fd4c76632/volumes" Jan 23 12:19:56 crc kubenswrapper[4865]: I0123 12:19:56.131686 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8104e0dd-89be-4d8f-a300-8c321e2959d0" path="/var/lib/kubelet/pods/8104e0dd-89be-4d8f-a300-8c321e2959d0/volumes" Jan 23 12:19:56 crc kubenswrapper[4865]: I0123 12:19:56.132276 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8389f4ae-eeb3-4dbf-ada2-14a152755af1" path="/var/lib/kubelet/pods/8389f4ae-eeb3-4dbf-ada2-14a152755af1/volumes" Jan 23 12:19:56 crc kubenswrapper[4865]: I0123 12:19:56.169148 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2bxr2"] Jan 23 12:19:56 crc kubenswrapper[4865]: I0123 12:19:56.206134 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2bxr2" event={"ID":"a1594c36-67eb-4195-8e6b-f72e2ddf140e","Type":"ContainerStarted","Data":"dce9792018cc1ceb3db245bb7a0b3db034f87f5a4d47825df01c092ed8d59b01"} Jan 23 12:19:57 crc kubenswrapper[4865]: I0123 12:19:57.218568 4865 generic.go:334] "Generic (PLEG): container finished" podID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" containerID="3cb13396b6b7ec6ddd98fdc335a0231e8f8c47d0ef70b6126aa636fae6114ff7" exitCode=0 Jan 23 12:19:57 crc kubenswrapper[4865]: I0123 12:19:57.218733 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bxr2" event={"ID":"a1594c36-67eb-4195-8e6b-f72e2ddf140e","Type":"ContainerDied","Data":"3cb13396b6b7ec6ddd98fdc335a0231e8f8c47d0ef70b6126aa636fae6114ff7"} Jan 23 12:19:58 crc kubenswrapper[4865]: I0123 12:19:58.228337 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bxr2" event={"ID":"a1594c36-67eb-4195-8e6b-f72e2ddf140e","Type":"ContainerStarted","Data":"692bfa084e2d3021b499a56c88e78dcdf392dbb7c0ecfa93225d735ea71c6678"} Jan 23 12:20:03 crc kubenswrapper[4865]: I0123 12:20:03.270622 4865 generic.go:334] "Generic (PLEG): container finished" podID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" containerID="692bfa084e2d3021b499a56c88e78dcdf392dbb7c0ecfa93225d735ea71c6678" exitCode=0 Jan 23 12:20:03 crc kubenswrapper[4865]: I0123 12:20:03.270703 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bxr2" event={"ID":"a1594c36-67eb-4195-8e6b-f72e2ddf140e","Type":"ContainerDied","Data":"692bfa084e2d3021b499a56c88e78dcdf392dbb7c0ecfa93225d735ea71c6678"} Jan 23 12:20:04 crc kubenswrapper[4865]: I0123 12:20:04.281887 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bxr2" event={"ID":"a1594c36-67eb-4195-8e6b-f72e2ddf140e","Type":"ContainerStarted","Data":"fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339"} Jan 23 12:20:04 crc kubenswrapper[4865]: I0123 12:20:04.304457 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2bxr2" podStartSLOduration=2.870724098 podStartE2EDuration="9.304437556s" podCreationTimestamp="2026-01-23 12:19:55 +0000 UTC" firstStartedPulling="2026-01-23 12:19:57.220156332 +0000 UTC m=+1641.389228568" lastFinishedPulling="2026-01-23 12:20:03.6538698 +0000 UTC m=+1647.822942026" observedRunningTime="2026-01-23 12:20:04.298238636 +0000 UTC m=+1648.467310882" watchObservedRunningTime="2026-01-23 12:20:04.304437556 +0000 UTC m=+1648.473509782" Jan 23 12:20:05 crc kubenswrapper[4865]: I0123 12:20:05.696040 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:20:05 crc kubenswrapper[4865]: I0123 12:20:05.696292 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:20:06 crc kubenswrapper[4865]: I0123 12:20:06.740644 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2bxr2" podUID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" containerName="registry-server" probeResult="failure" output=< Jan 23 12:20:06 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:20:06 crc kubenswrapper[4865]: > Jan 23 12:20:15 crc kubenswrapper[4865]: I0123 12:20:15.742466 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:20:15 crc kubenswrapper[4865]: I0123 12:20:15.792724 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:20:16 crc kubenswrapper[4865]: I0123 12:20:16.018550 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2bxr2"] Jan 23 12:20:17 crc kubenswrapper[4865]: I0123 12:20:17.066446 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vfxkd"] Jan 23 12:20:17 crc kubenswrapper[4865]: I0123 12:20:17.081427 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vfxkd"] Jan 23 12:20:17 crc kubenswrapper[4865]: I0123 12:20:17.424649 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2bxr2" podUID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" containerName="registry-server" containerID="cri-o://fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339" gracePeriod=2 Jan 23 12:20:17 crc kubenswrapper[4865]: I0123 12:20:17.933737 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:20:17 crc kubenswrapper[4865]: I0123 12:20:17.940700 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1594c36-67eb-4195-8e6b-f72e2ddf140e-utilities\") pod \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\" (UID: \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\") " Jan 23 12:20:17 crc kubenswrapper[4865]: I0123 12:20:17.940815 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1594c36-67eb-4195-8e6b-f72e2ddf140e-catalog-content\") pod \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\" (UID: \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\") " Jan 23 12:20:17 crc kubenswrapper[4865]: I0123 12:20:17.940929 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t66l8\" (UniqueName: \"kubernetes.io/projected/a1594c36-67eb-4195-8e6b-f72e2ddf140e-kube-api-access-t66l8\") pod \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\" (UID: \"a1594c36-67eb-4195-8e6b-f72e2ddf140e\") " Jan 23 12:20:17 crc kubenswrapper[4865]: I0123 12:20:17.941509 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1594c36-67eb-4195-8e6b-f72e2ddf140e-utilities" (OuterVolumeSpecName: "utilities") pod "a1594c36-67eb-4195-8e6b-f72e2ddf140e" (UID: "a1594c36-67eb-4195-8e6b-f72e2ddf140e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:20:17 crc kubenswrapper[4865]: I0123 12:20:17.942323 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1594c36-67eb-4195-8e6b-f72e2ddf140e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:20:17 crc kubenswrapper[4865]: I0123 12:20:17.947031 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1594c36-67eb-4195-8e6b-f72e2ddf140e-kube-api-access-t66l8" (OuterVolumeSpecName: "kube-api-access-t66l8") pod "a1594c36-67eb-4195-8e6b-f72e2ddf140e" (UID: "a1594c36-67eb-4195-8e6b-f72e2ddf140e"). InnerVolumeSpecName "kube-api-access-t66l8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.043795 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t66l8\" (UniqueName: \"kubernetes.io/projected/a1594c36-67eb-4195-8e6b-f72e2ddf140e-kube-api-access-t66l8\") on node \"crc\" DevicePath \"\"" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.060463 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1594c36-67eb-4195-8e6b-f72e2ddf140e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1594c36-67eb-4195-8e6b-f72e2ddf140e" (UID: "a1594c36-67eb-4195-8e6b-f72e2ddf140e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.128898 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9788e5dc-7889-456b-934c-09bf3aa01f25" path="/var/lib/kubelet/pods/9788e5dc-7889-456b-934c-09bf3aa01f25/volumes" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.145953 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1594c36-67eb-4195-8e6b-f72e2ddf140e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.436063 4865 generic.go:334] "Generic (PLEG): container finished" podID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" containerID="fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339" exitCode=0 Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.436182 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bxr2" event={"ID":"a1594c36-67eb-4195-8e6b-f72e2ddf140e","Type":"ContainerDied","Data":"fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339"} Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.436213 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bxr2" event={"ID":"a1594c36-67eb-4195-8e6b-f72e2ddf140e","Type":"ContainerDied","Data":"dce9792018cc1ceb3db245bb7a0b3db034f87f5a4d47825df01c092ed8d59b01"} Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.436235 4865 scope.go:117] "RemoveContainer" containerID="fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.436463 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2bxr2" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.469196 4865 scope.go:117] "RemoveContainer" containerID="692bfa084e2d3021b499a56c88e78dcdf392dbb7c0ecfa93225d735ea71c6678" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.471332 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2bxr2"] Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.482418 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2bxr2"] Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.490896 4865 scope.go:117] "RemoveContainer" containerID="3cb13396b6b7ec6ddd98fdc335a0231e8f8c47d0ef70b6126aa636fae6114ff7" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.545849 4865 scope.go:117] "RemoveContainer" containerID="fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339" Jan 23 12:20:18 crc kubenswrapper[4865]: E0123 12:20:18.546784 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339\": container with ID starting with fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339 not found: ID does not exist" containerID="fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.546896 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339"} err="failed to get container status \"fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339\": rpc error: code = NotFound desc = could not find container \"fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339\": container with ID starting with fdb995be48b226e2aece7215f641b4e3536f6ec7aeab77091891b020a4cae339 not found: ID does not exist" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.547007 4865 scope.go:117] "RemoveContainer" containerID="692bfa084e2d3021b499a56c88e78dcdf392dbb7c0ecfa93225d735ea71c6678" Jan 23 12:20:18 crc kubenswrapper[4865]: E0123 12:20:18.555936 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"692bfa084e2d3021b499a56c88e78dcdf392dbb7c0ecfa93225d735ea71c6678\": container with ID starting with 692bfa084e2d3021b499a56c88e78dcdf392dbb7c0ecfa93225d735ea71c6678 not found: ID does not exist" containerID="692bfa084e2d3021b499a56c88e78dcdf392dbb7c0ecfa93225d735ea71c6678" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.556010 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"692bfa084e2d3021b499a56c88e78dcdf392dbb7c0ecfa93225d735ea71c6678"} err="failed to get container status \"692bfa084e2d3021b499a56c88e78dcdf392dbb7c0ecfa93225d735ea71c6678\": rpc error: code = NotFound desc = could not find container \"692bfa084e2d3021b499a56c88e78dcdf392dbb7c0ecfa93225d735ea71c6678\": container with ID starting with 692bfa084e2d3021b499a56c88e78dcdf392dbb7c0ecfa93225d735ea71c6678 not found: ID does not exist" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.556044 4865 scope.go:117] "RemoveContainer" containerID="3cb13396b6b7ec6ddd98fdc335a0231e8f8c47d0ef70b6126aa636fae6114ff7" Jan 23 12:20:18 crc kubenswrapper[4865]: E0123 12:20:18.556907 4865 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"3cb13396b6b7ec6ddd98fdc335a0231e8f8c47d0ef70b6126aa636fae6114ff7\": container with ID starting with 3cb13396b6b7ec6ddd98fdc335a0231e8f8c47d0ef70b6126aa636fae6114ff7 not found: ID does not exist" containerID="3cb13396b6b7ec6ddd98fdc335a0231e8f8c47d0ef70b6126aa636fae6114ff7" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.556960 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cb13396b6b7ec6ddd98fdc335a0231e8f8c47d0ef70b6126aa636fae6114ff7"} err="failed to get container status \"3cb13396b6b7ec6ddd98fdc335a0231e8f8c47d0ef70b6126aa636fae6114ff7\": rpc error: code = NotFound desc = could not find container \"3cb13396b6b7ec6ddd98fdc335a0231e8f8c47d0ef70b6126aa636fae6114ff7\": container with ID starting with 3cb13396b6b7ec6ddd98fdc335a0231e8f8c47d0ef70b6126aa636fae6114ff7 not found: ID does not exist" Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.776612 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:20:18 crc kubenswrapper[4865]: I0123 12:20:18.776859 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:20:20 crc kubenswrapper[4865]: I0123 12:20:20.128443 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" path="/var/lib/kubelet/pods/a1594c36-67eb-4195-8e6b-f72e2ddf140e/volumes" Jan 23 12:20:47 crc kubenswrapper[4865]: I0123 12:20:47.045707 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xcpzg"] Jan 23 12:20:47 crc kubenswrapper[4865]: I0123 12:20:47.054882 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xcpzg"] Jan 23 12:20:48 crc kubenswrapper[4865]: I0123 12:20:48.132181 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f6099c-c8bb-4dfd-83ab-8b1084df2aee" path="/var/lib/kubelet/pods/e2f6099c-c8bb-4dfd-83ab-8b1084df2aee/volumes" Jan 23 12:20:48 crc kubenswrapper[4865]: I0123 12:20:48.776716 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:20:48 crc kubenswrapper[4865]: I0123 12:20:48.776793 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:20:48 crc kubenswrapper[4865]: I0123 12:20:48.776849 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 12:20:48 crc kubenswrapper[4865]: I0123 12:20:48.777527 4865 kuberuntime_manager.go:1027] "Message 
for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 12:20:48 crc kubenswrapper[4865]: I0123 12:20:48.777625 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" gracePeriod=600 Jan 23 12:20:48 crc kubenswrapper[4865]: E0123 12:20:48.905990 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:20:49 crc kubenswrapper[4865]: I0123 12:20:49.781085 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" exitCode=0 Jan 23 12:20:49 crc kubenswrapper[4865]: I0123 12:20:49.781129 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725"} Jan 23 12:20:49 crc kubenswrapper[4865]: I0123 12:20:49.781173 4865 scope.go:117] "RemoveContainer" containerID="f27c5e1f3f822d3f73db149902949b4aa2098b5ef3e947246d94e8825258d08b" Jan 23 12:20:49 crc kubenswrapper[4865]: I0123 12:20:49.782135 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:20:49 crc kubenswrapper[4865]: E0123 12:20:49.782750 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:20:51 crc kubenswrapper[4865]: I0123 12:20:51.510736 4865 scope.go:117] "RemoveContainer" containerID="ad7026dd1880909fa3617e0bc706c84f44e7de2456c653ffc29c3bcdcd365689" Jan 23 12:20:51 crc kubenswrapper[4865]: I0123 12:20:51.540825 4865 scope.go:117] "RemoveContainer" containerID="16bfbe2f2641d69d6007aacd6b4dca44387f827b19b8327fa768d75e9a950f1f" Jan 23 12:20:51 crc kubenswrapper[4865]: I0123 12:20:51.590561 4865 scope.go:117] "RemoveContainer" containerID="6a66ff237f4fa45b3b1217619406d39eb8ac7226e40a83444ce20bfeb6ed1558" Jan 23 12:20:51 crc kubenswrapper[4865]: I0123 12:20:51.628172 4865 scope.go:117] "RemoveContainer" containerID="a418635a52a035cd40dd57bc81964a1e82f01a8fb34505a5e02edb5c09b512c0" Jan 23 12:20:51 crc kubenswrapper[4865]: I0123 12:20:51.670563 4865 scope.go:117] "RemoveContainer" containerID="e3bef69dadbabe9f19c71d3355aeac110bd7ef5cd66a34df8d9ace106f3b8e30" Jan 
23 12:20:51 crc kubenswrapper[4865]: I0123 12:20:51.715346 4865 scope.go:117] "RemoveContainer" containerID="68bf6361cff1b404a4396ebd4214093c58a4075ecd4e5820e8b087e73e2283c2" Jan 23 12:20:51 crc kubenswrapper[4865]: I0123 12:20:51.757442 4865 scope.go:117] "RemoveContainer" containerID="9796ce4ffa86d8949c0573457f35eb5a11dcb5523558b2a1db5dadf190e8f6be" Jan 23 12:20:51 crc kubenswrapper[4865]: I0123 12:20:51.778114 4865 scope.go:117] "RemoveContainer" containerID="10db599a72e80c6ade55df3c27729fa059210304b61b76f08c80551517e01dce" Jan 23 12:20:58 crc kubenswrapper[4865]: I0123 12:20:58.786451 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4klph"] Jan 23 12:20:58 crc kubenswrapper[4865]: E0123 12:20:58.787497 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" containerName="extract-utilities" Jan 23 12:20:58 crc kubenswrapper[4865]: I0123 12:20:58.787523 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" containerName="extract-utilities" Jan 23 12:20:58 crc kubenswrapper[4865]: E0123 12:20:58.787552 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" containerName="extract-content" Jan 23 12:20:58 crc kubenswrapper[4865]: I0123 12:20:58.787563 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" containerName="extract-content" Jan 23 12:20:58 crc kubenswrapper[4865]: E0123 12:20:58.787581 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" containerName="registry-server" Jan 23 12:20:58 crc kubenswrapper[4865]: I0123 12:20:58.787593 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" containerName="registry-server" Jan 23 12:20:58 crc kubenswrapper[4865]: I0123 12:20:58.787966 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1594c36-67eb-4195-8e6b-f72e2ddf140e" containerName="registry-server" Jan 23 12:20:58 crc kubenswrapper[4865]: I0123 12:20:58.790414 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:20:58 crc kubenswrapper[4865]: I0123 12:20:58.798432 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4klph"] Jan 23 12:20:58 crc kubenswrapper[4865]: I0123 12:20:58.957714 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-utilities\") pod \"redhat-marketplace-4klph\" (UID: \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\") " pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:20:58 crc kubenswrapper[4865]: I0123 12:20:58.957821 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bd89\" (UniqueName: \"kubernetes.io/projected/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-kube-api-access-6bd89\") pod \"redhat-marketplace-4klph\" (UID: \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\") " pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:20:58 crc kubenswrapper[4865]: I0123 12:20:58.957855 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-catalog-content\") pod \"redhat-marketplace-4klph\" (UID: \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\") " pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:20:59 crc kubenswrapper[4865]: I0123 12:20:59.029686 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-ae31-account-create-update-m7fmp"] Jan 23 12:20:59 crc kubenswrapper[4865]: I0123 12:20:59.038305 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2079-account-create-update-xtn8h"] Jan 23 12:20:59 crc kubenswrapper[4865]: I0123 12:20:59.047045 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2079-account-create-update-xtn8h"] Jan 23 12:20:59 crc kubenswrapper[4865]: I0123 12:20:59.056196 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-ae31-account-create-update-m7fmp"] Jan 23 12:20:59 crc kubenswrapper[4865]: I0123 12:20:59.059794 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-utilities\") pod \"redhat-marketplace-4klph\" (UID: \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\") " pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:20:59 crc kubenswrapper[4865]: I0123 12:20:59.059921 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bd89\" (UniqueName: \"kubernetes.io/projected/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-kube-api-access-6bd89\") pod \"redhat-marketplace-4klph\" (UID: \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\") " pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:20:59 crc kubenswrapper[4865]: I0123 12:20:59.059971 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-catalog-content\") pod \"redhat-marketplace-4klph\" (UID: \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\") " pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:20:59 crc kubenswrapper[4865]: I0123 12:20:59.060361 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-utilities\") pod \"redhat-marketplace-4klph\" (UID: \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\") " pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:20:59 crc kubenswrapper[4865]: I0123 12:20:59.060498 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-catalog-content\") pod \"redhat-marketplace-4klph\" (UID: \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\") " pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:20:59 crc kubenswrapper[4865]: I0123 12:20:59.084223 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bd89\" (UniqueName: \"kubernetes.io/projected/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-kube-api-access-6bd89\") pod \"redhat-marketplace-4klph\" (UID: \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\") " pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:20:59 crc kubenswrapper[4865]: I0123 12:20:59.149084 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:20:59 crc kubenswrapper[4865]: I0123 12:20:59.915667 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4klph"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.045252 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-27ee-account-create-update-twq6z"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.061786 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-glvsw"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.071964 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-27ee-account-create-update-twq6z"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.082618 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-l2bdc"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.091842 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-glvsw"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.100297 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-bg7bm"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.108590 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4c64-account-create-update-q7s48"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.116149 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-l2bdc"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.130733 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47bf79e7-dbca-4568-877d-82d13222755e" path="/var/lib/kubelet/pods/47bf79e7-dbca-4568-877d-82d13222755e/volumes" Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.131666 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de0053c-5a7e-4b59-a93a-90a9073cfa30" path="/var/lib/kubelet/pods/5de0053c-5a7e-4b59-a93a-90a9073cfa30/volumes" Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.132592 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e83a6fe-4aab-44f6-a5c3-1d2afd376278" path="/var/lib/kubelet/pods/5e83a6fe-4aab-44f6-a5c3-1d2afd376278/volumes" Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.133456 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="af312aba-bce9-4e7c-a761-8ab57e0bb3e3" path="/var/lib/kubelet/pods/af312aba-bce9-4e7c-a761-8ab57e0bb3e3/volumes" Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.135114 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4" path="/var/lib/kubelet/pods/e9c2b9c6-3e88-4f03-8e72-ebc3f20052c4/volumes" Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.135985 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4c64-account-create-update-q7s48"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.136021 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-mhlkc"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.137445 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-bg7bm"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.144187 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-mhlkc"] Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.902232 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4klph" event={"ID":"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8","Type":"ContainerStarted","Data":"acee5c2f0ee0d44d2715b0477de196462be06114dc6a04a2cf6c35a7fe37472e"} Jan 23 12:21:00 crc kubenswrapper[4865]: I0123 12:21:00.902273 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4klph" event={"ID":"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8","Type":"ContainerStarted","Data":"29dbb67fe58eabbb1dc19922721de8a4401720c9a388994b8df9c4ef83f2b646"} Jan 23 12:21:01 crc kubenswrapper[4865]: I0123 12:21:01.911676 4865 generic.go:334] "Generic (PLEG): container finished" podID="2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" containerID="acee5c2f0ee0d44d2715b0477de196462be06114dc6a04a2cf6c35a7fe37472e" exitCode=0 Jan 23 12:21:01 crc kubenswrapper[4865]: I0123 12:21:01.911741 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4klph" event={"ID":"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8","Type":"ContainerDied","Data":"acee5c2f0ee0d44d2715b0477de196462be06114dc6a04a2cf6c35a7fe37472e"} Jan 23 12:21:02 crc kubenswrapper[4865]: I0123 12:21:02.138021 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5ba781-e3a4-4458-918c-816f636b14bf" path="/var/lib/kubelet/pods/0a5ba781-e3a4-4458-918c-816f636b14bf/volumes" Jan 23 12:21:02 crc kubenswrapper[4865]: I0123 12:21:02.139576 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81fbdceb-1c27-4507-8317-fa5b8e427716" path="/var/lib/kubelet/pods/81fbdceb-1c27-4507-8317-fa5b8e427716/volumes" Jan 23 12:21:02 crc kubenswrapper[4865]: I0123 12:21:02.142543 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86e93d72-830a-4415-b23a-91c49115233f" path="/var/lib/kubelet/pods/86e93d72-830a-4415-b23a-91c49115233f/volumes" Jan 23 12:21:04 crc kubenswrapper[4865]: I0123 12:21:04.042717 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-g94f9"] Jan 23 12:21:04 crc kubenswrapper[4865]: I0123 12:21:04.052823 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-g94f9"] Jan 23 12:21:04 crc kubenswrapper[4865]: I0123 12:21:04.118849 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:21:04 crc kubenswrapper[4865]: E0123 
12:21:04.119093 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:21:04 crc kubenswrapper[4865]: I0123 12:21:04.136566 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6" path="/var/lib/kubelet/pods/d9dbd2af-b7e3-40cb-9b53-527c5a03a3b6/volumes" Jan 23 12:21:04 crc kubenswrapper[4865]: I0123 12:21:04.938085 4865 generic.go:334] "Generic (PLEG): container finished" podID="2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" containerID="f7c0af963c51afd84e2a04a43ad5e1c9d08c0d9f15ff3a8bedd49991fcb64022" exitCode=0 Jan 23 12:21:04 crc kubenswrapper[4865]: I0123 12:21:04.938130 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4klph" event={"ID":"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8","Type":"ContainerDied","Data":"f7c0af963c51afd84e2a04a43ad5e1c9d08c0d9f15ff3a8bedd49991fcb64022"} Jan 23 12:21:06 crc kubenswrapper[4865]: I0123 12:21:06.965276 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4klph" event={"ID":"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8","Type":"ContainerStarted","Data":"c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc"} Jan 23 12:21:06 crc kubenswrapper[4865]: I0123 12:21:06.990437 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4klph" podStartSLOduration=4.424852262 podStartE2EDuration="8.990420867s" podCreationTimestamp="2026-01-23 12:20:58 +0000 UTC" firstStartedPulling="2026-01-23 12:21:01.913871996 +0000 UTC m=+1706.082944222" lastFinishedPulling="2026-01-23 12:21:06.479440601 +0000 UTC m=+1710.648512827" observedRunningTime="2026-01-23 12:21:06.986923002 +0000 UTC m=+1711.155995228" watchObservedRunningTime="2026-01-23 12:21:06.990420867 +0000 UTC m=+1711.159493093" Jan 23 12:21:09 crc kubenswrapper[4865]: I0123 12:21:09.149199 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:21:09 crc kubenswrapper[4865]: I0123 12:21:09.149505 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:21:09 crc kubenswrapper[4865]: I0123 12:21:09.193024 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:21:15 crc kubenswrapper[4865]: I0123 12:21:15.122738 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:21:15 crc kubenswrapper[4865]: E0123 12:21:15.125230 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:21:19 crc kubenswrapper[4865]: I0123 12:21:19.200987 4865 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:21:19 crc kubenswrapper[4865]: I0123 12:21:19.255439 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4klph"] Jan 23 12:21:20 crc kubenswrapper[4865]: I0123 12:21:20.075086 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4klph" podUID="2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" containerName="registry-server" containerID="cri-o://c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc" gracePeriod=2 Jan 23 12:21:21 crc kubenswrapper[4865]: I0123 12:21:21.662591 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:21:21 crc kubenswrapper[4865]: I0123 12:21:21.732590 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-utilities\") pod \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\" (UID: \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\") " Jan 23 12:21:21 crc kubenswrapper[4865]: I0123 12:21:21.732734 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bd89\" (UniqueName: \"kubernetes.io/projected/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-kube-api-access-6bd89\") pod \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\" (UID: \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\") " Jan 23 12:21:21 crc kubenswrapper[4865]: I0123 12:21:21.732795 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-catalog-content\") pod \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\" (UID: \"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8\") " Jan 23 12:21:21 crc kubenswrapper[4865]: I0123 12:21:21.733898 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-utilities" (OuterVolumeSpecName: "utilities") pod "2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" (UID: "2ad3e01d-215d-414a-b0d5-6ac3d2738cb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:21:21 crc kubenswrapper[4865]: I0123 12:21:21.741215 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-kube-api-access-6bd89" (OuterVolumeSpecName: "kube-api-access-6bd89") pod "2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" (UID: "2ad3e01d-215d-414a-b0d5-6ac3d2738cb8"). InnerVolumeSpecName "kube-api-access-6bd89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:21:21 crc kubenswrapper[4865]: I0123 12:21:21.754008 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" (UID: "2ad3e01d-215d-414a-b0d5-6ac3d2738cb8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:21:21 crc kubenswrapper[4865]: I0123 12:21:21.834734 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:21:21 crc kubenswrapper[4865]: I0123 12:21:21.834778 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bd89\" (UniqueName: \"kubernetes.io/projected/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-kube-api-access-6bd89\") on node \"crc\" DevicePath \"\"" Jan 23 12:21:21 crc kubenswrapper[4865]: I0123 12:21:21.834795 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.098751 4865 generic.go:334] "Generic (PLEG): container finished" podID="2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" containerID="c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc" exitCode=0 Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.099256 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4klph" event={"ID":"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8","Type":"ContainerDied","Data":"c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc"} Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.099446 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4klph" event={"ID":"2ad3e01d-215d-414a-b0d5-6ac3d2738cb8","Type":"ContainerDied","Data":"29dbb67fe58eabbb1dc19922721de8a4401720c9a388994b8df9c4ef83f2b646"} Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.099656 4865 scope.go:117] "RemoveContainer" containerID="c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc" Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.100076 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4klph" Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.121481 4865 scope.go:117] "RemoveContainer" containerID="f7c0af963c51afd84e2a04a43ad5e1c9d08c0d9f15ff3a8bedd49991fcb64022" Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.149955 4865 scope.go:117] "RemoveContainer" containerID="acee5c2f0ee0d44d2715b0477de196462be06114dc6a04a2cf6c35a7fe37472e" Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.216741 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4klph"] Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.221103 4865 scope.go:117] "RemoveContainer" containerID="c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc" Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.223492 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4klph"] Jan 23 12:21:22 crc kubenswrapper[4865]: E0123 12:21:22.226419 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc\": container with ID starting with c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc not found: ID does not exist" containerID="c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc" Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.226522 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc"} err="failed to get container status \"c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc\": rpc error: code = NotFound desc = could not find container \"c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc\": container with ID starting with c615c0d941f1aff7d211b64ef157d197e3778eb4e8e97cbb414cdfa72e0d18cc not found: ID does not exist" Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.226886 4865 scope.go:117] "RemoveContainer" containerID="f7c0af963c51afd84e2a04a43ad5e1c9d08c0d9f15ff3a8bedd49991fcb64022" Jan 23 12:21:22 crc kubenswrapper[4865]: E0123 12:21:22.227483 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7c0af963c51afd84e2a04a43ad5e1c9d08c0d9f15ff3a8bedd49991fcb64022\": container with ID starting with f7c0af963c51afd84e2a04a43ad5e1c9d08c0d9f15ff3a8bedd49991fcb64022 not found: ID does not exist" containerID="f7c0af963c51afd84e2a04a43ad5e1c9d08c0d9f15ff3a8bedd49991fcb64022" Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.227547 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7c0af963c51afd84e2a04a43ad5e1c9d08c0d9f15ff3a8bedd49991fcb64022"} err="failed to get container status \"f7c0af963c51afd84e2a04a43ad5e1c9d08c0d9f15ff3a8bedd49991fcb64022\": rpc error: code = NotFound desc = could not find container \"f7c0af963c51afd84e2a04a43ad5e1c9d08c0d9f15ff3a8bedd49991fcb64022\": container with ID starting with f7c0af963c51afd84e2a04a43ad5e1c9d08c0d9f15ff3a8bedd49991fcb64022 not found: ID does not exist" Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.227567 4865 scope.go:117] "RemoveContainer" containerID="acee5c2f0ee0d44d2715b0477de196462be06114dc6a04a2cf6c35a7fe37472e" Jan 23 12:21:22 crc kubenswrapper[4865]: E0123 12:21:22.227905 4865 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"acee5c2f0ee0d44d2715b0477de196462be06114dc6a04a2cf6c35a7fe37472e\": container with ID starting with acee5c2f0ee0d44d2715b0477de196462be06114dc6a04a2cf6c35a7fe37472e not found: ID does not exist" containerID="acee5c2f0ee0d44d2715b0477de196462be06114dc6a04a2cf6c35a7fe37472e" Jan 23 12:21:22 crc kubenswrapper[4865]: I0123 12:21:22.227994 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acee5c2f0ee0d44d2715b0477de196462be06114dc6a04a2cf6c35a7fe37472e"} err="failed to get container status \"acee5c2f0ee0d44d2715b0477de196462be06114dc6a04a2cf6c35a7fe37472e\": rpc error: code = NotFound desc = could not find container \"acee5c2f0ee0d44d2715b0477de196462be06114dc6a04a2cf6c35a7fe37472e\": container with ID starting with acee5c2f0ee0d44d2715b0477de196462be06114dc6a04a2cf6c35a7fe37472e not found: ID does not exist" Jan 23 12:21:24 crc kubenswrapper[4865]: I0123 12:21:24.129091 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" path="/var/lib/kubelet/pods/2ad3e01d-215d-414a-b0d5-6ac3d2738cb8/volumes" Jan 23 12:21:28 crc kubenswrapper[4865]: I0123 12:21:28.118725 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:21:28 crc kubenswrapper[4865]: E0123 12:21:28.119513 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:21:39 crc kubenswrapper[4865]: I0123 12:21:39.118765 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:21:39 crc kubenswrapper[4865]: E0123 12:21:39.119466 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:21:51 crc kubenswrapper[4865]: I0123 12:21:51.951517 4865 scope.go:117] "RemoveContainer" containerID="e3d3312765c37a25a5186fca3d2a2b049e7bc843cd0e02a2e10a5d1a07eb5fcb" Jan 23 12:21:51 crc kubenswrapper[4865]: I0123 12:21:51.981829 4865 scope.go:117] "RemoveContainer" containerID="d7078b3515259ead102e0e234fafac976acfe2d86ef701cc0d754d20c6af2acf" Jan 23 12:21:52 crc kubenswrapper[4865]: I0123 12:21:52.040537 4865 scope.go:117] "RemoveContainer" containerID="6bf4c5d697c8214e2ebbcfb60e57e22362f8fac5b514a722ae2d275a0d8bf32e" Jan 23 12:21:52 crc kubenswrapper[4865]: I0123 12:21:52.103897 4865 scope.go:117] "RemoveContainer" containerID="b743afdd02cb21c9ec48954c190e17965245585a53e682fd2f335376810b8959" Jan 23 12:21:52 crc kubenswrapper[4865]: I0123 12:21:52.180985 4865 scope.go:117] "RemoveContainer" containerID="dcfe04daeaa1f237711277bc1b564d1c87ff1f304f7c3d5a700227d7806e64c7" Jan 23 12:21:52 crc kubenswrapper[4865]: I0123 12:21:52.253622 4865 scope.go:117] "RemoveContainer" 
containerID="84bbd32f5e5592885910db180a5c03c622605469887843c4481ac3d99561a1f9" Jan 23 12:21:52 crc kubenswrapper[4865]: I0123 12:21:52.300578 4865 scope.go:117] "RemoveContainer" containerID="11fdab9ab1a6ffa11537d1ac2f703a49501f8ae83d85953bb7bf4497bfd3a75b" Jan 23 12:21:52 crc kubenswrapper[4865]: I0123 12:21:52.357099 4865 scope.go:117] "RemoveContainer" containerID="bef4710816c7f1e139a5517bef0e19f0a983e6265f22dd954056de25dfa1ecd0" Jan 23 12:21:52 crc kubenswrapper[4865]: I0123 12:21:52.402336 4865 scope.go:117] "RemoveContainer" containerID="584221033cd94411f9570ed0a13175f185942a3b4abad2102c446ebd502aa4a2" Jan 23 12:21:53 crc kubenswrapper[4865]: I0123 12:21:53.118404 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:21:53 crc kubenswrapper[4865]: E0123 12:21:53.118785 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:21:57 crc kubenswrapper[4865]: I0123 12:21:57.045434 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-scpxv"] Jan 23 12:21:57 crc kubenswrapper[4865]: I0123 12:21:57.055995 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-scpxv"] Jan 23 12:21:58 crc kubenswrapper[4865]: I0123 12:21:58.037182 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dlxs4"] Jan 23 12:21:58 crc kubenswrapper[4865]: I0123 12:21:58.044519 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dlxs4"] Jan 23 12:21:58 crc kubenswrapper[4865]: I0123 12:21:58.138262 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68c0d50f-01a6-4e5c-92e8-626af12ba85a" path="/var/lib/kubelet/pods/68c0d50f-01a6-4e5c-92e8-626af12ba85a/volumes" Jan 23 12:21:58 crc kubenswrapper[4865]: I0123 12:21:58.140720 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84103afc-63d9-416c-bc51-729cd8c6eeed" path="/var/lib/kubelet/pods/84103afc-63d9-416c-bc51-729cd8c6eeed/volumes" Jan 23 12:22:00 crc kubenswrapper[4865]: I0123 12:22:00.034392 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-zntkp"] Jan 23 12:22:00 crc kubenswrapper[4865]: I0123 12:22:00.048563 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-zntkp"] Jan 23 12:22:00 crc kubenswrapper[4865]: I0123 12:22:00.130624 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0155ddd7-e729-44e5-b3c9-e18d88d171ef" path="/var/lib/kubelet/pods/0155ddd7-e729-44e5-b3c9-e18d88d171ef/volumes" Jan 23 12:22:04 crc kubenswrapper[4865]: I0123 12:22:04.118396 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:22:04 crc kubenswrapper[4865]: E0123 12:22:04.118952 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:22:11 crc kubenswrapper[4865]: I0123 12:22:11.045124 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xqdv2"] Jan 23 12:22:11 crc kubenswrapper[4865]: I0123 12:22:11.055242 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xqdv2"] Jan 23 12:22:12 crc kubenswrapper[4865]: I0123 12:22:12.129869 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dbb30bd-db3b-48a2-96dd-6193b6a7ab90" path="/var/lib/kubelet/pods/1dbb30bd-db3b-48a2-96dd-6193b6a7ab90/volumes" Jan 23 12:22:15 crc kubenswrapper[4865]: I0123 12:22:15.060977 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-jw9z7"] Jan 23 12:22:15 crc kubenswrapper[4865]: I0123 12:22:15.072461 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-jw9z7"] Jan 23 12:22:16 crc kubenswrapper[4865]: I0123 12:22:16.127370 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:22:16 crc kubenswrapper[4865]: E0123 12:22:16.127874 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:22:16 crc kubenswrapper[4865]: I0123 12:22:16.131561 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e6117d5-9df1-4299-8358-d7235d7847d2" path="/var/lib/kubelet/pods/3e6117d5-9df1-4299-8358-d7235d7847d2/volumes" Jan 23 12:22:19 crc kubenswrapper[4865]: I0123 12:22:19.032415 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-sxqmn"] Jan 23 12:22:19 crc kubenswrapper[4865]: I0123 12:22:19.040471 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-sxqmn"] Jan 23 12:22:20 crc kubenswrapper[4865]: I0123 12:22:20.135878 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afab83a5-8e47-4531-80de-ae69dfd11bd9" path="/var/lib/kubelet/pods/afab83a5-8e47-4531-80de-ae69dfd11bd9/volumes" Jan 23 12:22:27 crc kubenswrapper[4865]: I0123 12:22:27.118674 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:22:27 crc kubenswrapper[4865]: E0123 12:22:27.119288 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:22:40 crc kubenswrapper[4865]: I0123 12:22:40.119087 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:22:40 crc kubenswrapper[4865]: E0123 12:22:40.120262 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:22:52 crc kubenswrapper[4865]: I0123 12:22:52.632881 4865 scope.go:117] "RemoveContainer" containerID="fb4e00a76b2e2eb4ed64c98986b511131a2e4fa10c3384f693c9bd331275ac7d" Jan 23 12:22:52 crc kubenswrapper[4865]: I0123 12:22:52.665066 4865 scope.go:117] "RemoveContainer" containerID="d540d33b36c5adf562161dadb0bcd930ee1137ee4310220b513fed962a09963d" Jan 23 12:22:52 crc kubenswrapper[4865]: I0123 12:22:52.702776 4865 scope.go:117] "RemoveContainer" containerID="d2105a12a7447eed63fffef6151979b3cefd11a2991c7c854b929bf968cf83ab" Jan 23 12:22:52 crc kubenswrapper[4865]: I0123 12:22:52.746780 4865 scope.go:117] "RemoveContainer" containerID="f1d6e4b57940fae2b9d0686009b1bfb552702fbf15d15ea19a28e777ae03b388" Jan 23 12:22:52 crc kubenswrapper[4865]: I0123 12:22:52.791639 4865 scope.go:117] "RemoveContainer" containerID="b739b8de3e8d33658d22d2bd79d15644861637e873f7619c8abb911df38bffde" Jan 23 12:22:52 crc kubenswrapper[4865]: I0123 12:22:52.832620 4865 scope.go:117] "RemoveContainer" containerID="f31df3bdd703f46a94516666fc069364522d00b4d795aa2e1b847e7c2a52a592" Jan 23 12:22:53 crc kubenswrapper[4865]: I0123 12:22:53.050826 4865 generic.go:334] "Generic (PLEG): container finished" podID="4d9ae425-cbae-4264-9462-76e79ab2d23e" containerID="0cb4dfa3494b2a37e059770881ea8236f5b64ab6220f9ef3c3e8de4e77580c90" exitCode=0 Jan 23 12:22:53 crc kubenswrapper[4865]: I0123 12:22:53.050875 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" event={"ID":"4d9ae425-cbae-4264-9462-76e79ab2d23e","Type":"ContainerDied","Data":"0cb4dfa3494b2a37e059770881ea8236f5b64ab6220f9ef3c3e8de4e77580c90"} Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.118105 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:22:54 crc kubenswrapper[4865]: E0123 12:22:54.118368 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.467550 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.550916 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhrls\" (UniqueName: \"kubernetes.io/projected/4d9ae425-cbae-4264-9462-76e79ab2d23e-kube-api-access-vhrls\") pod \"4d9ae425-cbae-4264-9462-76e79ab2d23e\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.550980 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-inventory\") pod \"4d9ae425-cbae-4264-9462-76e79ab2d23e\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.551018 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-bootstrap-combined-ca-bundle\") pod \"4d9ae425-cbae-4264-9462-76e79ab2d23e\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.551100 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-ssh-key-openstack-edpm-ipam\") pod \"4d9ae425-cbae-4264-9462-76e79ab2d23e\" (UID: \"4d9ae425-cbae-4264-9462-76e79ab2d23e\") " Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.558985 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4d9ae425-cbae-4264-9462-76e79ab2d23e" (UID: "4d9ae425-cbae-4264-9462-76e79ab2d23e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.559879 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9ae425-cbae-4264-9462-76e79ab2d23e-kube-api-access-vhrls" (OuterVolumeSpecName: "kube-api-access-vhrls") pod "4d9ae425-cbae-4264-9462-76e79ab2d23e" (UID: "4d9ae425-cbae-4264-9462-76e79ab2d23e"). InnerVolumeSpecName "kube-api-access-vhrls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.581973 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4d9ae425-cbae-4264-9462-76e79ab2d23e" (UID: "4d9ae425-cbae-4264-9462-76e79ab2d23e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.584741 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-inventory" (OuterVolumeSpecName: "inventory") pod "4d9ae425-cbae-4264-9462-76e79ab2d23e" (UID: "4d9ae425-cbae-4264-9462-76e79ab2d23e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.653009 4865 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.653126 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.653183 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhrls\" (UniqueName: \"kubernetes.io/projected/4d9ae425-cbae-4264-9462-76e79ab2d23e-kube-api-access-vhrls\") on node \"crc\" DevicePath \"\"" Jan 23 12:22:54 crc kubenswrapper[4865]: I0123 12:22:54.653276 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4d9ae425-cbae-4264-9462-76e79ab2d23e-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.065993 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" event={"ID":"4d9ae425-cbae-4264-9462-76e79ab2d23e","Type":"ContainerDied","Data":"41e7dcac171bf5afc3ce010691dcf572d3d423311897aa1cf54979abcb59cedb"} Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.066373 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41e7dcac171bf5afc3ce010691dcf572d3d423311897aa1cf54979abcb59cedb" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.066341 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-4m9kn" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.169123 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv"] Jan 23 12:22:55 crc kubenswrapper[4865]: E0123 12:22:55.170475 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" containerName="extract-utilities" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.170545 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" containerName="extract-utilities" Jan 23 12:22:55 crc kubenswrapper[4865]: E0123 12:22:55.170618 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9ae425-cbae-4264-9462-76e79ab2d23e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.170672 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9ae425-cbae-4264-9462-76e79ab2d23e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 12:22:55 crc kubenswrapper[4865]: E0123 12:22:55.170767 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" containerName="extract-content" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.170826 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" containerName="extract-content" Jan 23 12:22:55 crc kubenswrapper[4865]: E0123 12:22:55.170895 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" containerName="registry-server" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.170965 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" containerName="registry-server" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.171187 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ad3e01d-215d-414a-b0d5-6ac3d2738cb8" containerName="registry-server" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.171270 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9ae425-cbae-4264-9462-76e79ab2d23e" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.172169 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.176523 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.176466 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.176929 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.177307 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.195751 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv"] Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.265433 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvhln\" (UniqueName: \"kubernetes.io/projected/c32dfe8e-f816-417e-a439-df8d8ecef673-kube-api-access-mvhln\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g85dv\" (UID: \"c32dfe8e-f816-417e-a439-df8d8ecef673\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.265711 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32dfe8e-f816-417e-a439-df8d8ecef673-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g85dv\" (UID: \"c32dfe8e-f816-417e-a439-df8d8ecef673\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.266227 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32dfe8e-f816-417e-a439-df8d8ecef673-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g85dv\" (UID: \"c32dfe8e-f816-417e-a439-df8d8ecef673\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.368326 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvhln\" (UniqueName: \"kubernetes.io/projected/c32dfe8e-f816-417e-a439-df8d8ecef673-kube-api-access-mvhln\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g85dv\" (UID: \"c32dfe8e-f816-417e-a439-df8d8ecef673\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.368385 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32dfe8e-f816-417e-a439-df8d8ecef673-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g85dv\" (UID: \"c32dfe8e-f816-417e-a439-df8d8ecef673\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.368476 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c32dfe8e-f816-417e-a439-df8d8ecef673-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g85dv\" (UID: \"c32dfe8e-f816-417e-a439-df8d8ecef673\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.373926 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32dfe8e-f816-417e-a439-df8d8ecef673-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g85dv\" (UID: \"c32dfe8e-f816-417e-a439-df8d8ecef673\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.374558 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32dfe8e-f816-417e-a439-df8d8ecef673-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g85dv\" (UID: \"c32dfe8e-f816-417e-a439-df8d8ecef673\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.384299 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvhln\" (UniqueName: \"kubernetes.io/projected/c32dfe8e-f816-417e-a439-df8d8ecef673-kube-api-access-mvhln\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-g85dv\" (UID: \"c32dfe8e-f816-417e-a439-df8d8ecef673\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:22:55 crc kubenswrapper[4865]: I0123 12:22:55.496727 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:22:56 crc kubenswrapper[4865]: I0123 12:22:56.077457 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv"] Jan 23 12:22:56 crc kubenswrapper[4865]: I0123 12:22:56.087066 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 12:22:57 crc kubenswrapper[4865]: I0123 12:22:57.085095 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" event={"ID":"c32dfe8e-f816-417e-a439-df8d8ecef673","Type":"ContainerStarted","Data":"5bc22d99b30c19f4ab27eb5e470bb7db08517e3859475907f2461033cb9aebef"} Jan 23 12:22:57 crc kubenswrapper[4865]: I0123 12:22:57.085692 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" event={"ID":"c32dfe8e-f816-417e-a439-df8d8ecef673","Type":"ContainerStarted","Data":"71ae95e3d3e1d71baebf7e588fe0e313f4b3c2cf7fd9b80fc5420a2d8d7c55e8"} Jan 23 12:22:57 crc kubenswrapper[4865]: I0123 12:22:57.106682 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" podStartSLOduration=1.5227502149999999 podStartE2EDuration="2.106658969s" podCreationTimestamp="2026-01-23 12:22:55 +0000 UTC" firstStartedPulling="2026-01-23 12:22:56.086793229 +0000 UTC m=+1820.255865465" lastFinishedPulling="2026-01-23 12:22:56.670701993 +0000 UTC m=+1820.839774219" observedRunningTime="2026-01-23 12:22:57.102430966 +0000 UTC m=+1821.271503202" watchObservedRunningTime="2026-01-23 12:22:57.106658969 +0000 UTC m=+1821.275731195" Jan 23 12:23:09 crc 
kubenswrapper[4865]: I0123 12:23:09.118282 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:23:09 crc kubenswrapper[4865]: E0123 12:23:09.119925 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:23:23 crc kubenswrapper[4865]: I0123 12:23:23.117752 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:23:23 crc kubenswrapper[4865]: E0123 12:23:23.118985 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:23:38 crc kubenswrapper[4865]: I0123 12:23:38.119131 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:23:38 crc kubenswrapper[4865]: E0123 12:23:38.119992 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:23:49 crc kubenswrapper[4865]: I0123 12:23:49.118259 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:23:49 crc kubenswrapper[4865]: E0123 12:23:49.119073 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:23:50 crc kubenswrapper[4865]: I0123 12:23:50.047300 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7cea-account-create-update-jt957"] Jan 23 12:23:50 crc kubenswrapper[4865]: I0123 12:23:50.063740 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7cea-account-create-update-jt957"] Jan 23 12:23:50 crc kubenswrapper[4865]: I0123 12:23:50.132816 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73cc2832-0e03-44be-8cef-a9af622068cf" path="/var/lib/kubelet/pods/73cc2832-0e03-44be-8cef-a9af622068cf/volumes" Jan 23 12:23:51 crc kubenswrapper[4865]: I0123 12:23:51.033621 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-42dc-account-create-update-xcdlv"] Jan 23 12:23:51 crc kubenswrapper[4865]: I0123 12:23:51.041339 4865 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-api-db-create-l8x7m"] Jan 23 12:23:51 crc kubenswrapper[4865]: I0123 12:23:51.048949 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-15f5-account-create-update-bxfkf"] Jan 23 12:23:51 crc kubenswrapper[4865]: I0123 12:23:51.059167 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fdtph"] Jan 23 12:23:51 crc kubenswrapper[4865]: I0123 12:23:51.066681 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-42dc-account-create-update-xcdlv"] Jan 23 12:23:51 crc kubenswrapper[4865]: I0123 12:23:51.073470 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-l8x7m"] Jan 23 12:23:51 crc kubenswrapper[4865]: I0123 12:23:51.080616 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-15f5-account-create-update-bxfkf"] Jan 23 12:23:51 crc kubenswrapper[4865]: I0123 12:23:51.088175 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jbdnl"] Jan 23 12:23:51 crc kubenswrapper[4865]: I0123 12:23:51.099081 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fdtph"] Jan 23 12:23:51 crc kubenswrapper[4865]: I0123 12:23:51.110266 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jbdnl"] Jan 23 12:23:52 crc kubenswrapper[4865]: I0123 12:23:52.130284 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="395ce09d-bc53-4477-a5a7-c9ee9ab183ca" path="/var/lib/kubelet/pods/395ce09d-bc53-4477-a5a7-c9ee9ab183ca/volumes" Jan 23 12:23:52 crc kubenswrapper[4865]: I0123 12:23:52.131299 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c20d774-960c-4422-8fa5-2cdc2a6806fe" path="/var/lib/kubelet/pods/4c20d774-960c-4422-8fa5-2cdc2a6806fe/volumes" Jan 23 12:23:52 crc kubenswrapper[4865]: I0123 12:23:52.132187 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec931e2-d1cc-4fe3-aaa2-4bb4b141acad" path="/var/lib/kubelet/pods/bec931e2-d1cc-4fe3-aaa2-4bb4b141acad/volumes" Jan 23 12:23:52 crc kubenswrapper[4865]: I0123 12:23:52.133021 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0cf61d9-b096-4cea-a55e-e54176112f74" path="/var/lib/kubelet/pods/c0cf61d9-b096-4cea-a55e-e54176112f74/volumes" Jan 23 12:23:52 crc kubenswrapper[4865]: I0123 12:23:52.134285 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2b9cf60-06e6-462b-a56d-067f710d4efb" path="/var/lib/kubelet/pods/e2b9cf60-06e6-462b-a56d-067f710d4efb/volumes" Jan 23 12:23:52 crc kubenswrapper[4865]: I0123 12:23:52.999861 4865 scope.go:117] "RemoveContainer" containerID="37ae52e5c7ba61596a6f379b708473bef10c2dc88e77d8dc66cd7975b71d7532" Jan 23 12:23:53 crc kubenswrapper[4865]: I0123 12:23:53.043471 4865 scope.go:117] "RemoveContainer" containerID="a25df821220f84802a1a708e0a82db1e9d97e6549f229a8264eb31ac3fd33d02" Jan 23 12:23:53 crc kubenswrapper[4865]: I0123 12:23:53.075549 4865 scope.go:117] "RemoveContainer" containerID="17966645b840cb4d6436ea87662f1de3aec6998ceefce2b3e85e5303efcf3ff2" Jan 23 12:23:53 crc kubenswrapper[4865]: I0123 12:23:53.115089 4865 scope.go:117] "RemoveContainer" containerID="036cf6765ec8ca27aec5a07feb99a91870a62ee29ab4901b93855af0dc28d39e" Jan 23 12:23:53 crc kubenswrapper[4865]: I0123 12:23:53.168381 4865 scope.go:117] "RemoveContainer" containerID="9d12564bf440a8d6d3f4d76a1a9dbfa6604e3f09fc2c5046cb9ca90944da45d3" Jan 
23 12:23:53 crc kubenswrapper[4865]: I0123 12:23:53.208323 4865 scope.go:117] "RemoveContainer" containerID="92214d2d7c5655ee86575ed783d48c87c5b23b09863eec3d1dac4a434bf7cb11" Jan 23 12:24:03 crc kubenswrapper[4865]: I0123 12:24:03.118587 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:24:03 crc kubenswrapper[4865]: E0123 12:24:03.119283 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:24:17 crc kubenswrapper[4865]: I0123 12:24:17.118412 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:24:17 crc kubenswrapper[4865]: E0123 12:24:17.119189 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:24:28 crc kubenswrapper[4865]: I0123 12:24:28.118912 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:24:28 crc kubenswrapper[4865]: E0123 12:24:28.119632 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:24:41 crc kubenswrapper[4865]: I0123 12:24:41.086109 4865 generic.go:334] "Generic (PLEG): container finished" podID="c32dfe8e-f816-417e-a439-df8d8ecef673" containerID="5bc22d99b30c19f4ab27eb5e470bb7db08517e3859475907f2461033cb9aebef" exitCode=0 Jan 23 12:24:41 crc kubenswrapper[4865]: I0123 12:24:41.086189 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" event={"ID":"c32dfe8e-f816-417e-a439-df8d8ecef673","Type":"ContainerDied","Data":"5bc22d99b30c19f4ab27eb5e470bb7db08517e3859475907f2461033cb9aebef"} Jan 23 12:24:41 crc kubenswrapper[4865]: I0123 12:24:41.118339 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:24:41 crc kubenswrapper[4865]: E0123 12:24:41.118582 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:24:42 crc kubenswrapper[4865]: I0123 12:24:42.538382 4865 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:24:42 crc kubenswrapper[4865]: I0123 12:24:42.582868 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvhln\" (UniqueName: \"kubernetes.io/projected/c32dfe8e-f816-417e-a439-df8d8ecef673-kube-api-access-mvhln\") pod \"c32dfe8e-f816-417e-a439-df8d8ecef673\" (UID: \"c32dfe8e-f816-417e-a439-df8d8ecef673\") " Jan 23 12:24:42 crc kubenswrapper[4865]: I0123 12:24:42.582966 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32dfe8e-f816-417e-a439-df8d8ecef673-ssh-key-openstack-edpm-ipam\") pod \"c32dfe8e-f816-417e-a439-df8d8ecef673\" (UID: \"c32dfe8e-f816-417e-a439-df8d8ecef673\") " Jan 23 12:24:42 crc kubenswrapper[4865]: I0123 12:24:42.583050 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32dfe8e-f816-417e-a439-df8d8ecef673-inventory\") pod \"c32dfe8e-f816-417e-a439-df8d8ecef673\" (UID: \"c32dfe8e-f816-417e-a439-df8d8ecef673\") " Jan 23 12:24:42 crc kubenswrapper[4865]: I0123 12:24:42.593084 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c32dfe8e-f816-417e-a439-df8d8ecef673-kube-api-access-mvhln" (OuterVolumeSpecName: "kube-api-access-mvhln") pod "c32dfe8e-f816-417e-a439-df8d8ecef673" (UID: "c32dfe8e-f816-417e-a439-df8d8ecef673"). InnerVolumeSpecName "kube-api-access-mvhln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:24:42 crc kubenswrapper[4865]: I0123 12:24:42.617889 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32dfe8e-f816-417e-a439-df8d8ecef673-inventory" (OuterVolumeSpecName: "inventory") pod "c32dfe8e-f816-417e-a439-df8d8ecef673" (UID: "c32dfe8e-f816-417e-a439-df8d8ecef673"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:24:42 crc kubenswrapper[4865]: I0123 12:24:42.619217 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32dfe8e-f816-417e-a439-df8d8ecef673-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c32dfe8e-f816-417e-a439-df8d8ecef673" (UID: "c32dfe8e-f816-417e-a439-df8d8ecef673"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:24:42 crc kubenswrapper[4865]: I0123 12:24:42.684635 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c32dfe8e-f816-417e-a439-df8d8ecef673-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:24:42 crc kubenswrapper[4865]: I0123 12:24:42.684682 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvhln\" (UniqueName: \"kubernetes.io/projected/c32dfe8e-f816-417e-a439-df8d8ecef673-kube-api-access-mvhln\") on node \"crc\" DevicePath \"\"" Jan 23 12:24:42 crc kubenswrapper[4865]: I0123 12:24:42.684700 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c32dfe8e-f816-417e-a439-df8d8ecef673-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.110069 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" event={"ID":"c32dfe8e-f816-417e-a439-df8d8ecef673","Type":"ContainerDied","Data":"71ae95e3d3e1d71baebf7e588fe0e313f4b3c2cf7fd9b80fc5420a2d8d7c55e8"} Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.110108 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71ae95e3d3e1d71baebf7e588fe0e313f4b3c2cf7fd9b80fc5420a2d8d7c55e8" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.110134 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-g85dv" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.206141 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f"] Jan 23 12:24:43 crc kubenswrapper[4865]: E0123 12:24:43.206780 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32dfe8e-f816-417e-a439-df8d8ecef673" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.206804 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32dfe8e-f816-417e-a439-df8d8ecef673" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.207058 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c32dfe8e-f816-417e-a439-df8d8ecef673" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.207942 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.210372 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.210691 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.210873 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.211007 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.232532 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f"] Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.294488 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dee9d41d-2bda-459b-be37-6fe8d5a762df-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f\" (UID: \"dee9d41d-2bda-459b-be37-6fe8d5a762df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.294537 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dee9d41d-2bda-459b-be37-6fe8d5a762df-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f\" (UID: \"dee9d41d-2bda-459b-be37-6fe8d5a762df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.294802 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffjj2\" (UniqueName: \"kubernetes.io/projected/dee9d41d-2bda-459b-be37-6fe8d5a762df-kube-api-access-ffjj2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f\" (UID: \"dee9d41d-2bda-459b-be37-6fe8d5a762df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.396681 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffjj2\" (UniqueName: \"kubernetes.io/projected/dee9d41d-2bda-459b-be37-6fe8d5a762df-kube-api-access-ffjj2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f\" (UID: \"dee9d41d-2bda-459b-be37-6fe8d5a762df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.396778 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dee9d41d-2bda-459b-be37-6fe8d5a762df-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f\" (UID: \"dee9d41d-2bda-459b-be37-6fe8d5a762df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.396821 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/dee9d41d-2bda-459b-be37-6fe8d5a762df-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f\" (UID: \"dee9d41d-2bda-459b-be37-6fe8d5a762df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.400685 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dee9d41d-2bda-459b-be37-6fe8d5a762df-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f\" (UID: \"dee9d41d-2bda-459b-be37-6fe8d5a762df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.401564 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dee9d41d-2bda-459b-be37-6fe8d5a762df-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f\" (UID: \"dee9d41d-2bda-459b-be37-6fe8d5a762df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.417498 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffjj2\" (UniqueName: \"kubernetes.io/projected/dee9d41d-2bda-459b-be37-6fe8d5a762df-kube-api-access-ffjj2\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f\" (UID: \"dee9d41d-2bda-459b-be37-6fe8d5a762df\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:24:43 crc kubenswrapper[4865]: I0123 12:24:43.533389 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:24:44 crc kubenswrapper[4865]: I0123 12:24:44.080431 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f"] Jan 23 12:24:44 crc kubenswrapper[4865]: I0123 12:24:44.129390 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" event={"ID":"dee9d41d-2bda-459b-be37-6fe8d5a762df","Type":"ContainerStarted","Data":"c5eea04ddd89302c518ef01c4ddd69b7ad52cfdd4e4535a8e2e9151c6cffd1e4"} Jan 23 12:24:45 crc kubenswrapper[4865]: I0123 12:24:45.141391 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" event={"ID":"dee9d41d-2bda-459b-be37-6fe8d5a762df","Type":"ContainerStarted","Data":"88cdb959c29f52ac1ed68756ac1c912a3cf3b7af7b2bc22249966c2ecc395fc6"} Jan 23 12:24:45 crc kubenswrapper[4865]: I0123 12:24:45.175142 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" podStartSLOduration=1.651503213 podStartE2EDuration="2.175121326s" podCreationTimestamp="2026-01-23 12:24:43 +0000 UTC" firstStartedPulling="2026-01-23 12:24:44.067941127 +0000 UTC m=+1928.237013353" lastFinishedPulling="2026-01-23 12:24:44.59155924 +0000 UTC m=+1928.760631466" observedRunningTime="2026-01-23 12:24:45.166380413 +0000 UTC m=+1929.335452649" watchObservedRunningTime="2026-01-23 12:24:45.175121326 +0000 UTC m=+1929.344193552" Jan 23 12:24:56 crc kubenswrapper[4865]: I0123 12:24:56.126190 4865 scope.go:117] "RemoveContainer" 
containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:24:56 crc kubenswrapper[4865]: E0123 12:24:56.127237 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:25:00 crc kubenswrapper[4865]: I0123 12:25:00.054626 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rq28k"] Jan 23 12:25:00 crc kubenswrapper[4865]: I0123 12:25:00.061934 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rq28k"] Jan 23 12:25:00 crc kubenswrapper[4865]: I0123 12:25:00.132773 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb70ef0d-40c1-4ee9-b73e-98b471e378c2" path="/var/lib/kubelet/pods/bb70ef0d-40c1-4ee9-b73e-98b471e378c2/volumes" Jan 23 12:25:08 crc kubenswrapper[4865]: I0123 12:25:08.118532 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:25:08 crc kubenswrapper[4865]: E0123 12:25:08.119593 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:25:23 crc kubenswrapper[4865]: I0123 12:25:23.118533 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:25:23 crc kubenswrapper[4865]: E0123 12:25:23.119396 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:25:26 crc kubenswrapper[4865]: I0123 12:25:26.059789 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-d9nb4"] Jan 23 12:25:26 crc kubenswrapper[4865]: I0123 12:25:26.070020 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-d9nb4"] Jan 23 12:25:26 crc kubenswrapper[4865]: I0123 12:25:26.130324 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c09d8bf-3db6-47b0-b099-fe6be61d003f" path="/var/lib/kubelet/pods/5c09d8bf-3db6-47b0-b099-fe6be61d003f/volumes" Jan 23 12:25:30 crc kubenswrapper[4865]: I0123 12:25:30.033488 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zd6bk"] Jan 23 12:25:30 crc kubenswrapper[4865]: I0123 12:25:30.042973 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zd6bk"] Jan 23 12:25:30 crc kubenswrapper[4865]: I0123 12:25:30.128587 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="88f42439-e9e1-4de1-93fa-22a56502e805" path="/var/lib/kubelet/pods/88f42439-e9e1-4de1-93fa-22a56502e805/volumes" Jan 23 12:25:35 crc kubenswrapper[4865]: I0123 12:25:35.118800 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:25:35 crc kubenswrapper[4865]: E0123 12:25:35.119623 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:25:47 crc kubenswrapper[4865]: I0123 12:25:47.118122 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:25:47 crc kubenswrapper[4865]: E0123 12:25:47.118917 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:25:53 crc kubenswrapper[4865]: I0123 12:25:53.354205 4865 scope.go:117] "RemoveContainer" containerID="e5decde60c0e82eae35d75f68343d15038384d920e1d819c22bed60ed4575d97" Jan 23 12:25:53 crc kubenswrapper[4865]: I0123 12:25:53.398157 4865 scope.go:117] "RemoveContainer" containerID="ce27f782c75c978fbeed8ac2146424543175e584ca44631991f76da6b731d27c" Jan 23 12:25:53 crc kubenswrapper[4865]: I0123 12:25:53.455122 4865 scope.go:117] "RemoveContainer" containerID="1be52bc48bdbefd252086db62ae062d61449059896de0d866760ad8f508b4b9a" Jan 23 12:25:58 crc kubenswrapper[4865]: I0123 12:25:58.118715 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:25:58 crc kubenswrapper[4865]: I0123 12:25:58.806888 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"9e0926f65a291664cd50747401caf9b0e97c344dae0b4a2a81e701f0c1468f90"} Jan 23 12:26:11 crc kubenswrapper[4865]: I0123 12:26:11.043914 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-tmtvq"] Jan 23 12:26:11 crc kubenswrapper[4865]: I0123 12:26:11.053218 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-tmtvq"] Jan 23 12:26:12 crc kubenswrapper[4865]: I0123 12:26:12.131512 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="845c3ecd-d398-4524-b2ef-ff90c88fb498" path="/var/lib/kubelet/pods/845c3ecd-d398-4524-b2ef-ff90c88fb498/volumes" Jan 23 12:26:13 crc kubenswrapper[4865]: I0123 12:26:13.946660 4865 generic.go:334] "Generic (PLEG): container finished" podID="dee9d41d-2bda-459b-be37-6fe8d5a762df" containerID="88cdb959c29f52ac1ed68756ac1c912a3cf3b7af7b2bc22249966c2ecc395fc6" exitCode=0 Jan 23 12:26:13 crc kubenswrapper[4865]: I0123 12:26:13.947926 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" 
event={"ID":"dee9d41d-2bda-459b-be37-6fe8d5a762df","Type":"ContainerDied","Data":"88cdb959c29f52ac1ed68756ac1c912a3cf3b7af7b2bc22249966c2ecc395fc6"} Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.371582 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.476660 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffjj2\" (UniqueName: \"kubernetes.io/projected/dee9d41d-2bda-459b-be37-6fe8d5a762df-kube-api-access-ffjj2\") pod \"dee9d41d-2bda-459b-be37-6fe8d5a762df\" (UID: \"dee9d41d-2bda-459b-be37-6fe8d5a762df\") " Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.476762 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dee9d41d-2bda-459b-be37-6fe8d5a762df-ssh-key-openstack-edpm-ipam\") pod \"dee9d41d-2bda-459b-be37-6fe8d5a762df\" (UID: \"dee9d41d-2bda-459b-be37-6fe8d5a762df\") " Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.476841 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dee9d41d-2bda-459b-be37-6fe8d5a762df-inventory\") pod \"dee9d41d-2bda-459b-be37-6fe8d5a762df\" (UID: \"dee9d41d-2bda-459b-be37-6fe8d5a762df\") " Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.482091 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dee9d41d-2bda-459b-be37-6fe8d5a762df-kube-api-access-ffjj2" (OuterVolumeSpecName: "kube-api-access-ffjj2") pod "dee9d41d-2bda-459b-be37-6fe8d5a762df" (UID: "dee9d41d-2bda-459b-be37-6fe8d5a762df"). InnerVolumeSpecName "kube-api-access-ffjj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.505896 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dee9d41d-2bda-459b-be37-6fe8d5a762df-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dee9d41d-2bda-459b-be37-6fe8d5a762df" (UID: "dee9d41d-2bda-459b-be37-6fe8d5a762df"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.506777 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dee9d41d-2bda-459b-be37-6fe8d5a762df-inventory" (OuterVolumeSpecName: "inventory") pod "dee9d41d-2bda-459b-be37-6fe8d5a762df" (UID: "dee9d41d-2bda-459b-be37-6fe8d5a762df"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.578749 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffjj2\" (UniqueName: \"kubernetes.io/projected/dee9d41d-2bda-459b-be37-6fe8d5a762df-kube-api-access-ffjj2\") on node \"crc\" DevicePath \"\"" Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.578785 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dee9d41d-2bda-459b-be37-6fe8d5a762df-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.578796 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dee9d41d-2bda-459b-be37-6fe8d5a762df-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.970743 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" event={"ID":"dee9d41d-2bda-459b-be37-6fe8d5a762df","Type":"ContainerDied","Data":"c5eea04ddd89302c518ef01c4ddd69b7ad52cfdd4e4535a8e2e9151c6cffd1e4"} Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.970801 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5eea04ddd89302c518ef01c4ddd69b7ad52cfdd4e4535a8e2e9151c6cffd1e4" Jan 23 12:26:15 crc kubenswrapper[4865]: I0123 12:26:15.970877 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-bnm2f" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.079945 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69"] Jan 23 12:26:16 crc kubenswrapper[4865]: E0123 12:26:16.080478 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee9d41d-2bda-459b-be37-6fe8d5a762df" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.080500 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee9d41d-2bda-459b-be37-6fe8d5a762df" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.080885 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="dee9d41d-2bda-459b-be37-6fe8d5a762df" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.081737 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.086177 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.086239 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.086570 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.086933 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.094519 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69"] Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.191534 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fgr69\" (UID: \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.191642 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p55v2\" (UniqueName: \"kubernetes.io/projected/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-kube-api-access-p55v2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fgr69\" (UID: \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.191693 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fgr69\" (UID: \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.294124 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fgr69\" (UID: \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.294415 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fgr69\" (UID: \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.294460 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p55v2\" (UniqueName: 
\"kubernetes.io/projected/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-kube-api-access-p55v2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fgr69\" (UID: \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.298407 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fgr69\" (UID: \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.307803 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fgr69\" (UID: \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.317668 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p55v2\" (UniqueName: \"kubernetes.io/projected/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-kube-api-access-p55v2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-fgr69\" (UID: \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.412630 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.960162 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69"] Jan 23 12:26:16 crc kubenswrapper[4865]: I0123 12:26:16.983386 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" event={"ID":"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e","Type":"ContainerStarted","Data":"e7486df7d121a9614430fdede43e508b6e600b3c6cff79fd2b297d779b0fcafa"} Jan 23 12:26:18 crc kubenswrapper[4865]: I0123 12:26:18.999554 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" event={"ID":"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e","Type":"ContainerStarted","Data":"aad74d7093de0974ec34de6b717b2420985b276b0f4573a72d4d8b3887d10e96"} Jan 23 12:26:19 crc kubenswrapper[4865]: I0123 12:26:19.017146 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" podStartSLOduration=1.882322481 podStartE2EDuration="3.017127158s" podCreationTimestamp="2026-01-23 12:26:16 +0000 UTC" firstStartedPulling="2026-01-23 12:26:16.967258494 +0000 UTC m=+2021.136330730" lastFinishedPulling="2026-01-23 12:26:18.102063181 +0000 UTC m=+2022.271135407" observedRunningTime="2026-01-23 12:26:19.013383997 +0000 UTC m=+2023.182456233" watchObservedRunningTime="2026-01-23 12:26:19.017127158 +0000 UTC m=+2023.186199384" Jan 23 12:26:24 crc kubenswrapper[4865]: I0123 12:26:24.040907 4865 generic.go:334] "Generic (PLEG): container finished" podID="a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e" 
containerID="aad74d7093de0974ec34de6b717b2420985b276b0f4573a72d4d8b3887d10e96" exitCode=0 Jan 23 12:26:24 crc kubenswrapper[4865]: I0123 12:26:24.041076 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" event={"ID":"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e","Type":"ContainerDied","Data":"aad74d7093de0974ec34de6b717b2420985b276b0f4573a72d4d8b3887d10e96"} Jan 23 12:26:25 crc kubenswrapper[4865]: I0123 12:26:25.458770 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:25 crc kubenswrapper[4865]: I0123 12:26:25.524399 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-ssh-key-openstack-edpm-ipam\") pod \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\" (UID: \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\") " Jan 23 12:26:25 crc kubenswrapper[4865]: I0123 12:26:25.524506 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-inventory\") pod \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\" (UID: \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\") " Jan 23 12:26:25 crc kubenswrapper[4865]: I0123 12:26:25.524583 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p55v2\" (UniqueName: \"kubernetes.io/projected/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-kube-api-access-p55v2\") pod \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\" (UID: \"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e\") " Jan 23 12:26:25 crc kubenswrapper[4865]: I0123 12:26:25.545346 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-kube-api-access-p55v2" (OuterVolumeSpecName: "kube-api-access-p55v2") pod "a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e" (UID: "a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e"). InnerVolumeSpecName "kube-api-access-p55v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:26:25 crc kubenswrapper[4865]: I0123 12:26:25.558577 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e" (UID: "a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:26:25 crc kubenswrapper[4865]: I0123 12:26:25.559084 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-inventory" (OuterVolumeSpecName: "inventory") pod "a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e" (UID: "a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:26:25 crc kubenswrapper[4865]: I0123 12:26:25.627779 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:26:25 crc kubenswrapper[4865]: I0123 12:26:25.627829 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:26:25 crc kubenswrapper[4865]: I0123 12:26:25.627842 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p55v2\" (UniqueName: \"kubernetes.io/projected/a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e-kube-api-access-p55v2\") on node \"crc\" DevicePath \"\"" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.062687 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" event={"ID":"a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e","Type":"ContainerDied","Data":"e7486df7d121a9614430fdede43e508b6e600b3c6cff79fd2b297d779b0fcafa"} Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.063024 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7486df7d121a9614430fdede43e508b6e600b3c6cff79fd2b297d779b0fcafa" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.063108 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-fgr69" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.156129 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98"] Jan 23 12:26:26 crc kubenswrapper[4865]: E0123 12:26:26.156643 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.156663 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.156868 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61ab57f-7800-4bc5-a0ab-0a8c46c8b87e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.157699 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.160965 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.161334 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.161507 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.162867 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.174955 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98"] Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.238859 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/883e6533-27db-428c-a7f1-14fd08e18f1a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpb98\" (UID: \"883e6533-27db-428c-a7f1-14fd08e18f1a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.238952 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbrdm\" (UniqueName: \"kubernetes.io/projected/883e6533-27db-428c-a7f1-14fd08e18f1a-kube-api-access-sbrdm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpb98\" (UID: \"883e6533-27db-428c-a7f1-14fd08e18f1a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.238988 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/883e6533-27db-428c-a7f1-14fd08e18f1a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpb98\" (UID: \"883e6533-27db-428c-a7f1-14fd08e18f1a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.340483 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrdm\" (UniqueName: \"kubernetes.io/projected/883e6533-27db-428c-a7f1-14fd08e18f1a-kube-api-access-sbrdm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpb98\" (UID: \"883e6533-27db-428c-a7f1-14fd08e18f1a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.340671 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/883e6533-27db-428c-a7f1-14fd08e18f1a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpb98\" (UID: \"883e6533-27db-428c-a7f1-14fd08e18f1a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.341126 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/883e6533-27db-428c-a7f1-14fd08e18f1a-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-kpb98\" (UID: \"883e6533-27db-428c-a7f1-14fd08e18f1a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.345054 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/883e6533-27db-428c-a7f1-14fd08e18f1a-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpb98\" (UID: \"883e6533-27db-428c-a7f1-14fd08e18f1a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.346833 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/883e6533-27db-428c-a7f1-14fd08e18f1a-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpb98\" (UID: \"883e6533-27db-428c-a7f1-14fd08e18f1a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.373002 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbrdm\" (UniqueName: \"kubernetes.io/projected/883e6533-27db-428c-a7f1-14fd08e18f1a-kube-api-access-sbrdm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kpb98\" (UID: \"883e6533-27db-428c-a7f1-14fd08e18f1a\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:26:26 crc kubenswrapper[4865]: I0123 12:26:26.508459 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:26:27 crc kubenswrapper[4865]: I0123 12:26:27.026503 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98"] Jan 23 12:26:27 crc kubenswrapper[4865]: I0123 12:26:27.071724 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" event={"ID":"883e6533-27db-428c-a7f1-14fd08e18f1a","Type":"ContainerStarted","Data":"22a56140139c40d0dd31936f5a908708c4e0872ab78ba01f5031d1a6b33c059b"} Jan 23 12:26:28 crc kubenswrapper[4865]: I0123 12:26:28.081724 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" event={"ID":"883e6533-27db-428c-a7f1-14fd08e18f1a","Type":"ContainerStarted","Data":"cefb75d5ad72e85d48d21663b9c68c84fbf7a7d460eda5e8e6cbb6a44a75af78"} Jan 23 12:26:28 crc kubenswrapper[4865]: I0123 12:26:28.102833 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" podStartSLOduration=1.493192438 podStartE2EDuration="2.102815563s" podCreationTimestamp="2026-01-23 12:26:26 +0000 UTC" firstStartedPulling="2026-01-23 12:26:27.034100796 +0000 UTC m=+2031.203173022" lastFinishedPulling="2026-01-23 12:26:27.643723921 +0000 UTC m=+2031.812796147" observedRunningTime="2026-01-23 12:26:28.099991584 +0000 UTC m=+2032.269063810" watchObservedRunningTime="2026-01-23 12:26:28.102815563 +0000 UTC m=+2032.271887789" Jan 23 12:26:53 crc kubenswrapper[4865]: I0123 12:26:53.565718 4865 scope.go:117] "RemoveContainer" containerID="5d07687820a4c583231d0b28e336f409654bde9c6d7446b9955828005453d04e" Jan 23 12:27:13 crc kubenswrapper[4865]: I0123 12:27:13.484097 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="883e6533-27db-428c-a7f1-14fd08e18f1a" containerID="cefb75d5ad72e85d48d21663b9c68c84fbf7a7d460eda5e8e6cbb6a44a75af78" exitCode=0 Jan 23 12:27:13 crc kubenswrapper[4865]: I0123 12:27:13.484232 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" event={"ID":"883e6533-27db-428c-a7f1-14fd08e18f1a","Type":"ContainerDied","Data":"cefb75d5ad72e85d48d21663b9c68c84fbf7a7d460eda5e8e6cbb6a44a75af78"} Jan 23 12:27:14 crc kubenswrapper[4865]: I0123 12:27:14.882281 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:27:14 crc kubenswrapper[4865]: I0123 12:27:14.990187 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/883e6533-27db-428c-a7f1-14fd08e18f1a-ssh-key-openstack-edpm-ipam\") pod \"883e6533-27db-428c-a7f1-14fd08e18f1a\" (UID: \"883e6533-27db-428c-a7f1-14fd08e18f1a\") " Jan 23 12:27:14 crc kubenswrapper[4865]: I0123 12:27:14.990363 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/883e6533-27db-428c-a7f1-14fd08e18f1a-inventory\") pod \"883e6533-27db-428c-a7f1-14fd08e18f1a\" (UID: \"883e6533-27db-428c-a7f1-14fd08e18f1a\") " Jan 23 12:27:14 crc kubenswrapper[4865]: I0123 12:27:14.990511 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbrdm\" (UniqueName: \"kubernetes.io/projected/883e6533-27db-428c-a7f1-14fd08e18f1a-kube-api-access-sbrdm\") pod \"883e6533-27db-428c-a7f1-14fd08e18f1a\" (UID: \"883e6533-27db-428c-a7f1-14fd08e18f1a\") " Jan 23 12:27:14 crc kubenswrapper[4865]: I0123 12:27:14.997531 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883e6533-27db-428c-a7f1-14fd08e18f1a-kube-api-access-sbrdm" (OuterVolumeSpecName: "kube-api-access-sbrdm") pod "883e6533-27db-428c-a7f1-14fd08e18f1a" (UID: "883e6533-27db-428c-a7f1-14fd08e18f1a"). InnerVolumeSpecName "kube-api-access-sbrdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.024228 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883e6533-27db-428c-a7f1-14fd08e18f1a-inventory" (OuterVolumeSpecName: "inventory") pod "883e6533-27db-428c-a7f1-14fd08e18f1a" (UID: "883e6533-27db-428c-a7f1-14fd08e18f1a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.024945 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883e6533-27db-428c-a7f1-14fd08e18f1a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "883e6533-27db-428c-a7f1-14fd08e18f1a" (UID: "883e6533-27db-428c-a7f1-14fd08e18f1a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.093162 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbrdm\" (UniqueName: \"kubernetes.io/projected/883e6533-27db-428c-a7f1-14fd08e18f1a-kube-api-access-sbrdm\") on node \"crc\" DevicePath \"\"" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.093219 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/883e6533-27db-428c-a7f1-14fd08e18f1a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.093233 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/883e6533-27db-428c-a7f1-14fd08e18f1a-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.503436 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" event={"ID":"883e6533-27db-428c-a7f1-14fd08e18f1a","Type":"ContainerDied","Data":"22a56140139c40d0dd31936f5a908708c4e0872ab78ba01f5031d1a6b33c059b"} Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.503926 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22a56140139c40d0dd31936f5a908708c4e0872ab78ba01f5031d1a6b33c059b" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.503578 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kpb98" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.601088 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c"] Jan 23 12:27:15 crc kubenswrapper[4865]: E0123 12:27:15.601438 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883e6533-27db-428c-a7f1-14fd08e18f1a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.601460 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="883e6533-27db-428c-a7f1-14fd08e18f1a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.601658 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="883e6533-27db-428c-a7f1-14fd08e18f1a" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.602268 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.605559 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.605849 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.606257 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.608064 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.618300 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c"] Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.706492 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89be1258-dd4c-4786-8593-03311651ab13-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c\" (UID: \"89be1258-dd4c-4786-8593-03311651ab13\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.706545 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzs8x\" (UniqueName: \"kubernetes.io/projected/89be1258-dd4c-4786-8593-03311651ab13-kube-api-access-pzs8x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c\" (UID: \"89be1258-dd4c-4786-8593-03311651ab13\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.706878 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89be1258-dd4c-4786-8593-03311651ab13-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c\" (UID: \"89be1258-dd4c-4786-8593-03311651ab13\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.808850 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89be1258-dd4c-4786-8593-03311651ab13-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c\" (UID: \"89be1258-dd4c-4786-8593-03311651ab13\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.808992 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89be1258-dd4c-4786-8593-03311651ab13-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c\" (UID: \"89be1258-dd4c-4786-8593-03311651ab13\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.809019 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzs8x\" (UniqueName: 
\"kubernetes.io/projected/89be1258-dd4c-4786-8593-03311651ab13-kube-api-access-pzs8x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c\" (UID: \"89be1258-dd4c-4786-8593-03311651ab13\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.813937 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89be1258-dd4c-4786-8593-03311651ab13-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c\" (UID: \"89be1258-dd4c-4786-8593-03311651ab13\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.814026 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89be1258-dd4c-4786-8593-03311651ab13-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c\" (UID: \"89be1258-dd4c-4786-8593-03311651ab13\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.828272 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzs8x\" (UniqueName: \"kubernetes.io/projected/89be1258-dd4c-4786-8593-03311651ab13-kube-api-access-pzs8x\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c\" (UID: \"89be1258-dd4c-4786-8593-03311651ab13\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:27:15 crc kubenswrapper[4865]: I0123 12:27:15.939066 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:27:16 crc kubenswrapper[4865]: I0123 12:27:16.532589 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c"] Jan 23 12:27:17 crc kubenswrapper[4865]: I0123 12:27:17.519678 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" event={"ID":"89be1258-dd4c-4786-8593-03311651ab13","Type":"ContainerStarted","Data":"8cac25293f46af1096cb00fa4f20c5c3f2d761aef3b70c1af95efe903d749f8f"} Jan 23 12:27:17 crc kubenswrapper[4865]: I0123 12:27:17.520984 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" event={"ID":"89be1258-dd4c-4786-8593-03311651ab13","Type":"ContainerStarted","Data":"3557124d717f2e6a93c8e1194da5d216046c54deb706e9263a2fb09550818adc"} Jan 23 12:27:17 crc kubenswrapper[4865]: I0123 12:27:17.542805 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" podStartSLOduration=2.08832893 podStartE2EDuration="2.54278402s" podCreationTimestamp="2026-01-23 12:27:15 +0000 UTC" firstStartedPulling="2026-01-23 12:27:16.539048366 +0000 UTC m=+2080.708120602" lastFinishedPulling="2026-01-23 12:27:16.993503466 +0000 UTC m=+2081.162575692" observedRunningTime="2026-01-23 12:27:17.532734557 +0000 UTC m=+2081.701806783" watchObservedRunningTime="2026-01-23 12:27:17.54278402 +0000 UTC m=+2081.711856256" Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 12:27:30.035642 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6q2wx"] Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 
12:27:30.038027 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 12:27:30.054304 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6q2wx"] Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 12:27:30.186204 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2386f6f2-2686-4c24-b07e-f7aedddc20ee-catalog-content\") pod \"certified-operators-6q2wx\" (UID: \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\") " pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 12:27:30.186557 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2386f6f2-2686-4c24-b07e-f7aedddc20ee-utilities\") pod \"certified-operators-6q2wx\" (UID: \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\") " pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 12:27:30.186658 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgjvx\" (UniqueName: \"kubernetes.io/projected/2386f6f2-2686-4c24-b07e-f7aedddc20ee-kube-api-access-hgjvx\") pod \"certified-operators-6q2wx\" (UID: \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\") " pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 12:27:30.288334 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2386f6f2-2686-4c24-b07e-f7aedddc20ee-catalog-content\") pod \"certified-operators-6q2wx\" (UID: \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\") " pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 12:27:30.288527 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2386f6f2-2686-4c24-b07e-f7aedddc20ee-utilities\") pod \"certified-operators-6q2wx\" (UID: \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\") " pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 12:27:30.288558 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgjvx\" (UniqueName: \"kubernetes.io/projected/2386f6f2-2686-4c24-b07e-f7aedddc20ee-kube-api-access-hgjvx\") pod \"certified-operators-6q2wx\" (UID: \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\") " pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 12:27:30.289710 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2386f6f2-2686-4c24-b07e-f7aedddc20ee-utilities\") pod \"certified-operators-6q2wx\" (UID: \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\") " pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 12:27:30.289825 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2386f6f2-2686-4c24-b07e-f7aedddc20ee-catalog-content\") pod \"certified-operators-6q2wx\" (UID: \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\") " pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:30 crc 
kubenswrapper[4865]: I0123 12:27:30.310506 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgjvx\" (UniqueName: \"kubernetes.io/projected/2386f6f2-2686-4c24-b07e-f7aedddc20ee-kube-api-access-hgjvx\") pod \"certified-operators-6q2wx\" (UID: \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\") " pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 12:27:30.361564 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:30 crc kubenswrapper[4865]: I0123 12:27:30.952710 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6q2wx"] Jan 23 12:27:30 crc kubenswrapper[4865]: W0123 12:27:30.957105 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2386f6f2_2686_4c24_b07e_f7aedddc20ee.slice/crio-3b1882450698396bb9a12c9be6874af03e03527b7a89777cc7fbadca78fdd70b WatchSource:0}: Error finding container 3b1882450698396bb9a12c9be6874af03e03527b7a89777cc7fbadca78fdd70b: Status 404 returned error can't find the container with id 3b1882450698396bb9a12c9be6874af03e03527b7a89777cc7fbadca78fdd70b Jan 23 12:27:31 crc kubenswrapper[4865]: I0123 12:27:31.627919 4865 generic.go:334] "Generic (PLEG): container finished" podID="2386f6f2-2686-4c24-b07e-f7aedddc20ee" containerID="e14302847044bd25a0fcb83aa46128c442c98b2272fae66e284bed0b79b9b46c" exitCode=0 Jan 23 12:27:31 crc kubenswrapper[4865]: I0123 12:27:31.628019 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q2wx" event={"ID":"2386f6f2-2686-4c24-b07e-f7aedddc20ee","Type":"ContainerDied","Data":"e14302847044bd25a0fcb83aa46128c442c98b2272fae66e284bed0b79b9b46c"} Jan 23 12:27:31 crc kubenswrapper[4865]: I0123 12:27:31.628223 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q2wx" event={"ID":"2386f6f2-2686-4c24-b07e-f7aedddc20ee","Type":"ContainerStarted","Data":"3b1882450698396bb9a12c9be6874af03e03527b7a89777cc7fbadca78fdd70b"} Jan 23 12:27:32 crc kubenswrapper[4865]: I0123 12:27:32.638170 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q2wx" event={"ID":"2386f6f2-2686-4c24-b07e-f7aedddc20ee","Type":"ContainerStarted","Data":"779c45897c5e674ea5445d15483348fc4a03a3e6ae1f7a71509394fcee51ea57"} Jan 23 12:27:34 crc kubenswrapper[4865]: I0123 12:27:34.657541 4865 generic.go:334] "Generic (PLEG): container finished" podID="2386f6f2-2686-4c24-b07e-f7aedddc20ee" containerID="779c45897c5e674ea5445d15483348fc4a03a3e6ae1f7a71509394fcee51ea57" exitCode=0 Jan 23 12:27:34 crc kubenswrapper[4865]: I0123 12:27:34.657627 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q2wx" event={"ID":"2386f6f2-2686-4c24-b07e-f7aedddc20ee","Type":"ContainerDied","Data":"779c45897c5e674ea5445d15483348fc4a03a3e6ae1f7a71509394fcee51ea57"} Jan 23 12:27:35 crc kubenswrapper[4865]: I0123 12:27:35.670787 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q2wx" event={"ID":"2386f6f2-2686-4c24-b07e-f7aedddc20ee","Type":"ContainerStarted","Data":"0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35"} Jan 23 12:27:35 crc kubenswrapper[4865]: I0123 12:27:35.692623 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-6q2wx" podStartSLOduration=2.10931382 podStartE2EDuration="5.692588818s" podCreationTimestamp="2026-01-23 12:27:30 +0000 UTC" firstStartedPulling="2026-01-23 12:27:31.630254945 +0000 UTC m=+2095.799327171" lastFinishedPulling="2026-01-23 12:27:35.213529943 +0000 UTC m=+2099.382602169" observedRunningTime="2026-01-23 12:27:35.690869217 +0000 UTC m=+2099.859941443" watchObservedRunningTime="2026-01-23 12:27:35.692588818 +0000 UTC m=+2099.861661044" Jan 23 12:27:40 crc kubenswrapper[4865]: I0123 12:27:40.363168 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:40 crc kubenswrapper[4865]: I0123 12:27:40.363764 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:40 crc kubenswrapper[4865]: I0123 12:27:40.407965 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:40 crc kubenswrapper[4865]: I0123 12:27:40.774996 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:40 crc kubenswrapper[4865]: I0123 12:27:40.830439 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6q2wx"] Jan 23 12:27:42 crc kubenswrapper[4865]: I0123 12:27:42.749347 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6q2wx" podUID="2386f6f2-2686-4c24-b07e-f7aedddc20ee" containerName="registry-server" containerID="cri-o://0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35" gracePeriod=2 Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.262995 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.463234 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2386f6f2-2686-4c24-b07e-f7aedddc20ee-utilities\") pod \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\" (UID: \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\") " Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.463299 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2386f6f2-2686-4c24-b07e-f7aedddc20ee-catalog-content\") pod \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\" (UID: \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\") " Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.463378 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgjvx\" (UniqueName: \"kubernetes.io/projected/2386f6f2-2686-4c24-b07e-f7aedddc20ee-kube-api-access-hgjvx\") pod \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\" (UID: \"2386f6f2-2686-4c24-b07e-f7aedddc20ee\") " Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.464337 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2386f6f2-2686-4c24-b07e-f7aedddc20ee-utilities" (OuterVolumeSpecName: "utilities") pod "2386f6f2-2686-4c24-b07e-f7aedddc20ee" (UID: "2386f6f2-2686-4c24-b07e-f7aedddc20ee"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.469705 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2386f6f2-2686-4c24-b07e-f7aedddc20ee-kube-api-access-hgjvx" (OuterVolumeSpecName: "kube-api-access-hgjvx") pod "2386f6f2-2686-4c24-b07e-f7aedddc20ee" (UID: "2386f6f2-2686-4c24-b07e-f7aedddc20ee"). InnerVolumeSpecName "kube-api-access-hgjvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.519324 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2386f6f2-2686-4c24-b07e-f7aedddc20ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2386f6f2-2686-4c24-b07e-f7aedddc20ee" (UID: "2386f6f2-2686-4c24-b07e-f7aedddc20ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.565747 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2386f6f2-2686-4c24-b07e-f7aedddc20ee-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.565789 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2386f6f2-2686-4c24-b07e-f7aedddc20ee-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.565802 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgjvx\" (UniqueName: \"kubernetes.io/projected/2386f6f2-2686-4c24-b07e-f7aedddc20ee-kube-api-access-hgjvx\") on node \"crc\" DevicePath \"\"" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.757791 4865 generic.go:334] "Generic (PLEG): container finished" podID="2386f6f2-2686-4c24-b07e-f7aedddc20ee" containerID="0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35" exitCode=0 Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.757830 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q2wx" event={"ID":"2386f6f2-2686-4c24-b07e-f7aedddc20ee","Type":"ContainerDied","Data":"0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35"} Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.757853 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6q2wx" event={"ID":"2386f6f2-2686-4c24-b07e-f7aedddc20ee","Type":"ContainerDied","Data":"3b1882450698396bb9a12c9be6874af03e03527b7a89777cc7fbadca78fdd70b"} Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.757871 4865 scope.go:117] "RemoveContainer" containerID="0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.757901 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6q2wx" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.779097 4865 scope.go:117] "RemoveContainer" containerID="779c45897c5e674ea5445d15483348fc4a03a3e6ae1f7a71509394fcee51ea57" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.803754 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6q2wx"] Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.815804 4865 scope.go:117] "RemoveContainer" containerID="e14302847044bd25a0fcb83aa46128c442c98b2272fae66e284bed0b79b9b46c" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.816488 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6q2wx"] Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.848748 4865 scope.go:117] "RemoveContainer" containerID="0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35" Jan 23 12:27:43 crc kubenswrapper[4865]: E0123 12:27:43.849491 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35\": container with ID starting with 0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35 not found: ID does not exist" containerID="0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.849608 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35"} err="failed to get container status \"0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35\": rpc error: code = NotFound desc = could not find container \"0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35\": container with ID starting with 0d29bb1c22acb7e5d5ce08541127e64bf27ce0462124815514f696708dad6e35 not found: ID does not exist" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.849692 4865 scope.go:117] "RemoveContainer" containerID="779c45897c5e674ea5445d15483348fc4a03a3e6ae1f7a71509394fcee51ea57" Jan 23 12:27:43 crc kubenswrapper[4865]: E0123 12:27:43.850065 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"779c45897c5e674ea5445d15483348fc4a03a3e6ae1f7a71509394fcee51ea57\": container with ID starting with 779c45897c5e674ea5445d15483348fc4a03a3e6ae1f7a71509394fcee51ea57 not found: ID does not exist" containerID="779c45897c5e674ea5445d15483348fc4a03a3e6ae1f7a71509394fcee51ea57" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.850153 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"779c45897c5e674ea5445d15483348fc4a03a3e6ae1f7a71509394fcee51ea57"} err="failed to get container status \"779c45897c5e674ea5445d15483348fc4a03a3e6ae1f7a71509394fcee51ea57\": rpc error: code = NotFound desc = could not find container \"779c45897c5e674ea5445d15483348fc4a03a3e6ae1f7a71509394fcee51ea57\": container with ID starting with 779c45897c5e674ea5445d15483348fc4a03a3e6ae1f7a71509394fcee51ea57 not found: ID does not exist" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.850284 4865 scope.go:117] "RemoveContainer" containerID="e14302847044bd25a0fcb83aa46128c442c98b2272fae66e284bed0b79b9b46c" Jan 23 12:27:43 crc kubenswrapper[4865]: E0123 12:27:43.850635 4865 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e14302847044bd25a0fcb83aa46128c442c98b2272fae66e284bed0b79b9b46c\": container with ID starting with e14302847044bd25a0fcb83aa46128c442c98b2272fae66e284bed0b79b9b46c not found: ID does not exist" containerID="e14302847044bd25a0fcb83aa46128c442c98b2272fae66e284bed0b79b9b46c" Jan 23 12:27:43 crc kubenswrapper[4865]: I0123 12:27:43.850742 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e14302847044bd25a0fcb83aa46128c442c98b2272fae66e284bed0b79b9b46c"} err="failed to get container status \"e14302847044bd25a0fcb83aa46128c442c98b2272fae66e284bed0b79b9b46c\": rpc error: code = NotFound desc = could not find container \"e14302847044bd25a0fcb83aa46128c442c98b2272fae66e284bed0b79b9b46c\": container with ID starting with e14302847044bd25a0fcb83aa46128c442c98b2272fae66e284bed0b79b9b46c not found: ID does not exist" Jan 23 12:27:44 crc kubenswrapper[4865]: I0123 12:27:44.128326 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2386f6f2-2686-4c24-b07e-f7aedddc20ee" path="/var/lib/kubelet/pods/2386f6f2-2686-4c24-b07e-f7aedddc20ee/volumes" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.395091 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2km4c"] Jan 23 12:27:59 crc kubenswrapper[4865]: E0123 12:27:59.396030 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2386f6f2-2686-4c24-b07e-f7aedddc20ee" containerName="extract-utilities" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.396044 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2386f6f2-2686-4c24-b07e-f7aedddc20ee" containerName="extract-utilities" Jan 23 12:27:59 crc kubenswrapper[4865]: E0123 12:27:59.396060 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2386f6f2-2686-4c24-b07e-f7aedddc20ee" containerName="extract-content" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.396066 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2386f6f2-2686-4c24-b07e-f7aedddc20ee" containerName="extract-content" Jan 23 12:27:59 crc kubenswrapper[4865]: E0123 12:27:59.396104 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2386f6f2-2686-4c24-b07e-f7aedddc20ee" containerName="registry-server" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.396112 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="2386f6f2-2686-4c24-b07e-f7aedddc20ee" containerName="registry-server" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.396313 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="2386f6f2-2686-4c24-b07e-f7aedddc20ee" containerName="registry-server" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.399929 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.414068 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2km4c"] Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.578333 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-catalog-content\") pod \"community-operators-2km4c\" (UID: \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\") " pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.578707 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gvqz\" (UniqueName: \"kubernetes.io/projected/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-kube-api-access-5gvqz\") pod \"community-operators-2km4c\" (UID: \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\") " pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.578851 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-utilities\") pod \"community-operators-2km4c\" (UID: \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\") " pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.680470 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-catalog-content\") pod \"community-operators-2km4c\" (UID: \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\") " pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.680517 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gvqz\" (UniqueName: \"kubernetes.io/projected/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-kube-api-access-5gvqz\") pod \"community-operators-2km4c\" (UID: \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\") " pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.680593 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-utilities\") pod \"community-operators-2km4c\" (UID: \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\") " pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.681081 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-utilities\") pod \"community-operators-2km4c\" (UID: \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\") " pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.681081 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-catalog-content\") pod \"community-operators-2km4c\" (UID: \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\") " pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.705155 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5gvqz\" (UniqueName: \"kubernetes.io/projected/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-kube-api-access-5gvqz\") pod \"community-operators-2km4c\" (UID: \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\") " pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:27:59 crc kubenswrapper[4865]: I0123 12:27:59.723575 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:28:00 crc kubenswrapper[4865]: I0123 12:28:00.290055 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2km4c"] Jan 23 12:28:00 crc kubenswrapper[4865]: I0123 12:28:00.933252 4865 generic.go:334] "Generic (PLEG): container finished" podID="eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" containerID="0d1df4b0fcd8794b6d84192f6d750e8b444f55067a08b91a83755f50340304c4" exitCode=0 Jan 23 12:28:00 crc kubenswrapper[4865]: I0123 12:28:00.933306 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2km4c" event={"ID":"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344","Type":"ContainerDied","Data":"0d1df4b0fcd8794b6d84192f6d750e8b444f55067a08b91a83755f50340304c4"} Jan 23 12:28:00 crc kubenswrapper[4865]: I0123 12:28:00.933509 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2km4c" event={"ID":"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344","Type":"ContainerStarted","Data":"ce025aa61bc27514f0603186c4074e9275b7cbc4a395ccc21218cc47d682e9a1"} Jan 23 12:28:00 crc kubenswrapper[4865]: I0123 12:28:00.934991 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 12:28:02 crc kubenswrapper[4865]: I0123 12:28:02.952851 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2km4c" event={"ID":"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344","Type":"ContainerStarted","Data":"c6dcfdb7162c1db9dd858a318064d765743dfcc02f77bfecbf7428c637d914cd"} Jan 23 12:28:03 crc kubenswrapper[4865]: I0123 12:28:03.963548 4865 generic.go:334] "Generic (PLEG): container finished" podID="eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" containerID="c6dcfdb7162c1db9dd858a318064d765743dfcc02f77bfecbf7428c637d914cd" exitCode=0 Jan 23 12:28:03 crc kubenswrapper[4865]: I0123 12:28:03.963678 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2km4c" event={"ID":"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344","Type":"ContainerDied","Data":"c6dcfdb7162c1db9dd858a318064d765743dfcc02f77bfecbf7428c637d914cd"} Jan 23 12:28:04 crc kubenswrapper[4865]: I0123 12:28:04.971964 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2km4c" event={"ID":"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344","Type":"ContainerStarted","Data":"b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6"} Jan 23 12:28:04 crc kubenswrapper[4865]: I0123 12:28:04.991851 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2km4c" podStartSLOduration=2.559432695 podStartE2EDuration="5.991834431s" podCreationTimestamp="2026-01-23 12:27:59 +0000 UTC" firstStartedPulling="2026-01-23 12:28:00.934710284 +0000 UTC m=+2125.103782520" lastFinishedPulling="2026-01-23 12:28:04.36711203 +0000 UTC m=+2128.536184256" observedRunningTime="2026-01-23 12:28:04.988240154 +0000 UTC m=+2129.157312390" watchObservedRunningTime="2026-01-23 
12:28:04.991834431 +0000 UTC m=+2129.160906667" Jan 23 12:28:09 crc kubenswrapper[4865]: I0123 12:28:09.725276 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:28:09 crc kubenswrapper[4865]: I0123 12:28:09.726849 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:28:09 crc kubenswrapper[4865]: I0123 12:28:09.794111 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:28:10 crc kubenswrapper[4865]: I0123 12:28:10.069892 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:28:10 crc kubenswrapper[4865]: I0123 12:28:10.141301 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2km4c"] Jan 23 12:28:12 crc kubenswrapper[4865]: I0123 12:28:12.029755 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2km4c" podUID="eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" containerName="registry-server" containerID="cri-o://b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6" gracePeriod=2 Jan 23 12:28:12 crc kubenswrapper[4865]: I0123 12:28:12.534448 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:28:12 crc kubenswrapper[4865]: I0123 12:28:12.656886 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gvqz\" (UniqueName: \"kubernetes.io/projected/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-kube-api-access-5gvqz\") pod \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\" (UID: \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\") " Jan 23 12:28:12 crc kubenswrapper[4865]: I0123 12:28:12.656927 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-catalog-content\") pod \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\" (UID: \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\") " Jan 23 12:28:12 crc kubenswrapper[4865]: I0123 12:28:12.657011 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-utilities\") pod \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\" (UID: \"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344\") " Jan 23 12:28:12 crc kubenswrapper[4865]: I0123 12:28:12.657958 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-utilities" (OuterVolumeSpecName: "utilities") pod "eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" (UID: "eb304f70-6e6b-4ee4-adb6-b04d0f2d3344"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:28:12 crc kubenswrapper[4865]: I0123 12:28:12.665228 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-kube-api-access-5gvqz" (OuterVolumeSpecName: "kube-api-access-5gvqz") pod "eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" (UID: "eb304f70-6e6b-4ee4-adb6-b04d0f2d3344"). InnerVolumeSpecName "kube-api-access-5gvqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:28:12 crc kubenswrapper[4865]: I0123 12:28:12.718588 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" (UID: "eb304f70-6e6b-4ee4-adb6-b04d0f2d3344"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:28:12 crc kubenswrapper[4865]: I0123 12:28:12.760337 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gvqz\" (UniqueName: \"kubernetes.io/projected/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-kube-api-access-5gvqz\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:12 crc kubenswrapper[4865]: I0123 12:28:12.760631 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:12 crc kubenswrapper[4865]: I0123 12:28:12.760666 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.040883 4865 generic.go:334] "Generic (PLEG): container finished" podID="eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" containerID="b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6" exitCode=0 Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.040944 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2km4c" event={"ID":"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344","Type":"ContainerDied","Data":"b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6"} Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.040978 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2km4c" Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.041007 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2km4c" event={"ID":"eb304f70-6e6b-4ee4-adb6-b04d0f2d3344","Type":"ContainerDied","Data":"ce025aa61bc27514f0603186c4074e9275b7cbc4a395ccc21218cc47d682e9a1"} Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.041044 4865 scope.go:117] "RemoveContainer" containerID="b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6" Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.082896 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2km4c"] Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.090288 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2km4c"] Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.130209 4865 scope.go:117] "RemoveContainer" containerID="c6dcfdb7162c1db9dd858a318064d765743dfcc02f77bfecbf7428c637d914cd" Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.151731 4865 scope.go:117] "RemoveContainer" containerID="0d1df4b0fcd8794b6d84192f6d750e8b444f55067a08b91a83755f50340304c4" Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.205718 4865 scope.go:117] "RemoveContainer" containerID="b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6" Jan 23 12:28:13 crc kubenswrapper[4865]: E0123 12:28:13.206255 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6\": container with ID starting with b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6 not found: ID does not exist" containerID="b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6" Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.206294 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6"} err="failed to get container status \"b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6\": rpc error: code = NotFound desc = could not find container \"b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6\": container with ID starting with b97a96b35b433c6c5e484fb3ceb3e5f1291ea6bc7ea71d0b47a5912108de4df6 not found: ID does not exist" Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.206314 4865 scope.go:117] "RemoveContainer" containerID="c6dcfdb7162c1db9dd858a318064d765743dfcc02f77bfecbf7428c637d914cd" Jan 23 12:28:13 crc kubenswrapper[4865]: E0123 12:28:13.206959 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6dcfdb7162c1db9dd858a318064d765743dfcc02f77bfecbf7428c637d914cd\": container with ID starting with c6dcfdb7162c1db9dd858a318064d765743dfcc02f77bfecbf7428c637d914cd not found: ID does not exist" containerID="c6dcfdb7162c1db9dd858a318064d765743dfcc02f77bfecbf7428c637d914cd" Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.206979 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6dcfdb7162c1db9dd858a318064d765743dfcc02f77bfecbf7428c637d914cd"} err="failed to get container status \"c6dcfdb7162c1db9dd858a318064d765743dfcc02f77bfecbf7428c637d914cd\": rpc error: code = NotFound desc = could not find 
container \"c6dcfdb7162c1db9dd858a318064d765743dfcc02f77bfecbf7428c637d914cd\": container with ID starting with c6dcfdb7162c1db9dd858a318064d765743dfcc02f77bfecbf7428c637d914cd not found: ID does not exist" Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.206991 4865 scope.go:117] "RemoveContainer" containerID="0d1df4b0fcd8794b6d84192f6d750e8b444f55067a08b91a83755f50340304c4" Jan 23 12:28:13 crc kubenswrapper[4865]: E0123 12:28:13.207247 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d1df4b0fcd8794b6d84192f6d750e8b444f55067a08b91a83755f50340304c4\": container with ID starting with 0d1df4b0fcd8794b6d84192f6d750e8b444f55067a08b91a83755f50340304c4 not found: ID does not exist" containerID="0d1df4b0fcd8794b6d84192f6d750e8b444f55067a08b91a83755f50340304c4" Jan 23 12:28:13 crc kubenswrapper[4865]: I0123 12:28:13.207325 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d1df4b0fcd8794b6d84192f6d750e8b444f55067a08b91a83755f50340304c4"} err="failed to get container status \"0d1df4b0fcd8794b6d84192f6d750e8b444f55067a08b91a83755f50340304c4\": rpc error: code = NotFound desc = could not find container \"0d1df4b0fcd8794b6d84192f6d750e8b444f55067a08b91a83755f50340304c4\": container with ID starting with 0d1df4b0fcd8794b6d84192f6d750e8b444f55067a08b91a83755f50340304c4 not found: ID does not exist" Jan 23 12:28:14 crc kubenswrapper[4865]: I0123 12:28:14.131938 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" path="/var/lib/kubelet/pods/eb304f70-6e6b-4ee4-adb6-b04d0f2d3344/volumes" Jan 23 12:28:16 crc kubenswrapper[4865]: I0123 12:28:16.093175 4865 generic.go:334] "Generic (PLEG): container finished" podID="89be1258-dd4c-4786-8593-03311651ab13" containerID="8cac25293f46af1096cb00fa4f20c5c3f2d761aef3b70c1af95efe903d749f8f" exitCode=0 Jan 23 12:28:16 crc kubenswrapper[4865]: I0123 12:28:16.093410 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" event={"ID":"89be1258-dd4c-4786-8593-03311651ab13","Type":"ContainerDied","Data":"8cac25293f46af1096cb00fa4f20c5c3f2d761aef3b70c1af95efe903d749f8f"} Jan 23 12:28:17 crc kubenswrapper[4865]: I0123 12:28:17.584968 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:28:17 crc kubenswrapper[4865]: I0123 12:28:17.683079 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89be1258-dd4c-4786-8593-03311651ab13-inventory\") pod \"89be1258-dd4c-4786-8593-03311651ab13\" (UID: \"89be1258-dd4c-4786-8593-03311651ab13\") " Jan 23 12:28:17 crc kubenswrapper[4865]: I0123 12:28:17.683199 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89be1258-dd4c-4786-8593-03311651ab13-ssh-key-openstack-edpm-ipam\") pod \"89be1258-dd4c-4786-8593-03311651ab13\" (UID: \"89be1258-dd4c-4786-8593-03311651ab13\") " Jan 23 12:28:17 crc kubenswrapper[4865]: I0123 12:28:17.683271 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzs8x\" (UniqueName: \"kubernetes.io/projected/89be1258-dd4c-4786-8593-03311651ab13-kube-api-access-pzs8x\") pod \"89be1258-dd4c-4786-8593-03311651ab13\" (UID: \"89be1258-dd4c-4786-8593-03311651ab13\") " Jan 23 12:28:17 crc kubenswrapper[4865]: I0123 12:28:17.713165 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89be1258-dd4c-4786-8593-03311651ab13-kube-api-access-pzs8x" (OuterVolumeSpecName: "kube-api-access-pzs8x") pod "89be1258-dd4c-4786-8593-03311651ab13" (UID: "89be1258-dd4c-4786-8593-03311651ab13"). InnerVolumeSpecName "kube-api-access-pzs8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:28:17 crc kubenswrapper[4865]: I0123 12:28:17.725494 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89be1258-dd4c-4786-8593-03311651ab13-inventory" (OuterVolumeSpecName: "inventory") pod "89be1258-dd4c-4786-8593-03311651ab13" (UID: "89be1258-dd4c-4786-8593-03311651ab13"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:28:17 crc kubenswrapper[4865]: I0123 12:28:17.761826 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89be1258-dd4c-4786-8593-03311651ab13-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "89be1258-dd4c-4786-8593-03311651ab13" (UID: "89be1258-dd4c-4786-8593-03311651ab13"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:28:17 crc kubenswrapper[4865]: I0123 12:28:17.785820 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzs8x\" (UniqueName: \"kubernetes.io/projected/89be1258-dd4c-4786-8593-03311651ab13-kube-api-access-pzs8x\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:17 crc kubenswrapper[4865]: I0123 12:28:17.785853 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89be1258-dd4c-4786-8593-03311651ab13-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:17 crc kubenswrapper[4865]: I0123 12:28:17.785863 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89be1258-dd4c-4786-8593-03311651ab13-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.116308 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" event={"ID":"89be1258-dd4c-4786-8593-03311651ab13","Type":"ContainerDied","Data":"3557124d717f2e6a93c8e1194da5d216046c54deb706e9263a2fb09550818adc"} Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.116362 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-w2r4c" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.116371 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3557124d717f2e6a93c8e1194da5d216046c54deb706e9263a2fb09550818adc" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.243807 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zjxgt"] Jan 23 12:28:18 crc kubenswrapper[4865]: E0123 12:28:18.244258 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" containerName="extract-utilities" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.244285 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" containerName="extract-utilities" Jan 23 12:28:18 crc kubenswrapper[4865]: E0123 12:28:18.244303 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" containerName="registry-server" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.244360 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" containerName="registry-server" Jan 23 12:28:18 crc kubenswrapper[4865]: E0123 12:28:18.244383 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" containerName="extract-content" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.244394 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" containerName="extract-content" Jan 23 12:28:18 crc kubenswrapper[4865]: E0123 12:28:18.244416 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89be1258-dd4c-4786-8593-03311651ab13" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.244445 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="89be1258-dd4c-4786-8593-03311651ab13" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.244738 4865 
memory_manager.go:354] "RemoveStaleState removing state" podUID="89be1258-dd4c-4786-8593-03311651ab13" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.244782 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb304f70-6e6b-4ee4-adb6-b04d0f2d3344" containerName="registry-server" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.245527 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.250889 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.251667 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.253478 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.253715 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zjxgt"] Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.253899 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.294656 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f0de315-0274-43da-a14b-ce2dffe44752-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zjxgt\" (UID: \"8f0de315-0274-43da-a14b-ce2dffe44752\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.294742 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8f0de315-0274-43da-a14b-ce2dffe44752-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zjxgt\" (UID: \"8f0de315-0274-43da-a14b-ce2dffe44752\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.294824 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk2pf\" (UniqueName: \"kubernetes.io/projected/8f0de315-0274-43da-a14b-ce2dffe44752-kube-api-access-bk2pf\") pod \"ssh-known-hosts-edpm-deployment-zjxgt\" (UID: \"8f0de315-0274-43da-a14b-ce2dffe44752\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.396535 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8f0de315-0274-43da-a14b-ce2dffe44752-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zjxgt\" (UID: \"8f0de315-0274-43da-a14b-ce2dffe44752\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.396664 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk2pf\" (UniqueName: \"kubernetes.io/projected/8f0de315-0274-43da-a14b-ce2dffe44752-kube-api-access-bk2pf\") pod \"ssh-known-hosts-edpm-deployment-zjxgt\" (UID: \"8f0de315-0274-43da-a14b-ce2dffe44752\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.396731 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f0de315-0274-43da-a14b-ce2dffe44752-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zjxgt\" (UID: \"8f0de315-0274-43da-a14b-ce2dffe44752\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.400330 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8f0de315-0274-43da-a14b-ce2dffe44752-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-zjxgt\" (UID: \"8f0de315-0274-43da-a14b-ce2dffe44752\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.404158 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f0de315-0274-43da-a14b-ce2dffe44752-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-zjxgt\" (UID: \"8f0de315-0274-43da-a14b-ce2dffe44752\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.416239 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk2pf\" (UniqueName: \"kubernetes.io/projected/8f0de315-0274-43da-a14b-ce2dffe44752-kube-api-access-bk2pf\") pod \"ssh-known-hosts-edpm-deployment-zjxgt\" (UID: \"8f0de315-0274-43da-a14b-ce2dffe44752\") " pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.566474 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.776336 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:28:18 crc kubenswrapper[4865]: I0123 12:28:18.776662 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:28:19 crc kubenswrapper[4865]: I0123 12:28:19.125731 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-zjxgt"] Jan 23 12:28:19 crc kubenswrapper[4865]: W0123 12:28:19.136345 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f0de315_0274_43da_a14b_ce2dffe44752.slice/crio-e290b319362dc9e99d0252cb8023b95c030aa1676b8068c8fb1a914ccb57ace9 WatchSource:0}: Error finding container e290b319362dc9e99d0252cb8023b95c030aa1676b8068c8fb1a914ccb57ace9: Status 404 returned error can't find the container with id e290b319362dc9e99d0252cb8023b95c030aa1676b8068c8fb1a914ccb57ace9 Jan 23 12:28:20 crc kubenswrapper[4865]: I0123 12:28:20.142413 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" event={"ID":"8f0de315-0274-43da-a14b-ce2dffe44752","Type":"ContainerStarted","Data":"a0591d7b5dce95ba543af8129b77fc2a462873c34202b7d7f253fd15fa5e070d"} Jan 23 12:28:20 crc kubenswrapper[4865]: I0123 12:28:20.143150 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" event={"ID":"8f0de315-0274-43da-a14b-ce2dffe44752","Type":"ContainerStarted","Data":"e290b319362dc9e99d0252cb8023b95c030aa1676b8068c8fb1a914ccb57ace9"} Jan 23 12:28:20 crc kubenswrapper[4865]: I0123 12:28:20.169135 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" podStartSLOduration=1.605797988 podStartE2EDuration="2.169110503s" podCreationTimestamp="2026-01-23 12:28:18 +0000 UTC" firstStartedPulling="2026-01-23 12:28:19.139706187 +0000 UTC m=+2143.308778413" lastFinishedPulling="2026-01-23 12:28:19.703018662 +0000 UTC m=+2143.872090928" observedRunningTime="2026-01-23 12:28:20.167452112 +0000 UTC m=+2144.336524338" watchObservedRunningTime="2026-01-23 12:28:20.169110503 +0000 UTC m=+2144.338182759" Jan 23 12:28:28 crc kubenswrapper[4865]: I0123 12:28:28.237090 4865 generic.go:334] "Generic (PLEG): container finished" podID="8f0de315-0274-43da-a14b-ce2dffe44752" containerID="a0591d7b5dce95ba543af8129b77fc2a462873c34202b7d7f253fd15fa5e070d" exitCode=0 Jan 23 12:28:28 crc kubenswrapper[4865]: I0123 12:28:28.237158 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" event={"ID":"8f0de315-0274-43da-a14b-ce2dffe44752","Type":"ContainerDied","Data":"a0591d7b5dce95ba543af8129b77fc2a462873c34202b7d7f253fd15fa5e070d"} Jan 23 12:28:29 crc kubenswrapper[4865]: I0123 12:28:29.742940 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:29 crc kubenswrapper[4865]: I0123 12:28:29.915023 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f0de315-0274-43da-a14b-ce2dffe44752-ssh-key-openstack-edpm-ipam\") pod \"8f0de315-0274-43da-a14b-ce2dffe44752\" (UID: \"8f0de315-0274-43da-a14b-ce2dffe44752\") " Jan 23 12:28:29 crc kubenswrapper[4865]: I0123 12:28:29.915266 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk2pf\" (UniqueName: \"kubernetes.io/projected/8f0de315-0274-43da-a14b-ce2dffe44752-kube-api-access-bk2pf\") pod \"8f0de315-0274-43da-a14b-ce2dffe44752\" (UID: \"8f0de315-0274-43da-a14b-ce2dffe44752\") " Jan 23 12:28:29 crc kubenswrapper[4865]: I0123 12:28:29.915327 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8f0de315-0274-43da-a14b-ce2dffe44752-inventory-0\") pod \"8f0de315-0274-43da-a14b-ce2dffe44752\" (UID: \"8f0de315-0274-43da-a14b-ce2dffe44752\") " Jan 23 12:28:29 crc kubenswrapper[4865]: I0123 12:28:29.924539 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f0de315-0274-43da-a14b-ce2dffe44752-kube-api-access-bk2pf" (OuterVolumeSpecName: "kube-api-access-bk2pf") pod "8f0de315-0274-43da-a14b-ce2dffe44752" (UID: "8f0de315-0274-43da-a14b-ce2dffe44752"). InnerVolumeSpecName "kube-api-access-bk2pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:28:29 crc kubenswrapper[4865]: I0123 12:28:29.944465 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0de315-0274-43da-a14b-ce2dffe44752-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8f0de315-0274-43da-a14b-ce2dffe44752" (UID: "8f0de315-0274-43da-a14b-ce2dffe44752"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:28:29 crc kubenswrapper[4865]: I0123 12:28:29.955637 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0de315-0274-43da-a14b-ce2dffe44752-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8f0de315-0274-43da-a14b-ce2dffe44752" (UID: "8f0de315-0274-43da-a14b-ce2dffe44752"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.018691 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk2pf\" (UniqueName: \"kubernetes.io/projected/8f0de315-0274-43da-a14b-ce2dffe44752-kube-api-access-bk2pf\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.019046 4865 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8f0de315-0274-43da-a14b-ce2dffe44752-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.019240 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f0de315-0274-43da-a14b-ce2dffe44752-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.259898 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" event={"ID":"8f0de315-0274-43da-a14b-ce2dffe44752","Type":"ContainerDied","Data":"e290b319362dc9e99d0252cb8023b95c030aa1676b8068c8fb1a914ccb57ace9"} Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.259944 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e290b319362dc9e99d0252cb8023b95c030aa1676b8068c8fb1a914ccb57ace9" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.260082 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-zjxgt" Jan 23 12:28:30 crc kubenswrapper[4865]: E0123 12:28:30.278827 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f0de315_0274_43da_a14b_ce2dffe44752.slice\": RecentStats: unable to find data in memory cache]" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.343685 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq"] Jan 23 12:28:30 crc kubenswrapper[4865]: E0123 12:28:30.344634 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0de315-0274-43da-a14b-ce2dffe44752" containerName="ssh-known-hosts-edpm-deployment" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.344661 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0de315-0274-43da-a14b-ce2dffe44752" containerName="ssh-known-hosts-edpm-deployment" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.345025 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0de315-0274-43da-a14b-ce2dffe44752" containerName="ssh-known-hosts-edpm-deployment" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.346022 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.351143 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.351172 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.351443 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.351699 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.378510 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq"] Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.526334 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxw86\" (UniqueName: \"kubernetes.io/projected/647473f3-db71-4daf-b4a6-0abc4af6e53d-kube-api-access-jxw86\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-87llq\" (UID: \"647473f3-db71-4daf-b4a6-0abc4af6e53d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.527086 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/647473f3-db71-4daf-b4a6-0abc4af6e53d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-87llq\" (UID: \"647473f3-db71-4daf-b4a6-0abc4af6e53d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.527292 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/647473f3-db71-4daf-b4a6-0abc4af6e53d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-87llq\" (UID: \"647473f3-db71-4daf-b4a6-0abc4af6e53d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.629219 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/647473f3-db71-4daf-b4a6-0abc4af6e53d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-87llq\" (UID: \"647473f3-db71-4daf-b4a6-0abc4af6e53d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.629290 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/647473f3-db71-4daf-b4a6-0abc4af6e53d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-87llq\" (UID: \"647473f3-db71-4daf-b4a6-0abc4af6e53d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.629366 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxw86\" (UniqueName: \"kubernetes.io/projected/647473f3-db71-4daf-b4a6-0abc4af6e53d-kube-api-access-jxw86\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-87llq\" (UID: \"647473f3-db71-4daf-b4a6-0abc4af6e53d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.637587 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/647473f3-db71-4daf-b4a6-0abc4af6e53d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-87llq\" (UID: \"647473f3-db71-4daf-b4a6-0abc4af6e53d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.638218 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/647473f3-db71-4daf-b4a6-0abc4af6e53d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-87llq\" (UID: \"647473f3-db71-4daf-b4a6-0abc4af6e53d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.650096 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxw86\" (UniqueName: \"kubernetes.io/projected/647473f3-db71-4daf-b4a6-0abc4af6e53d-kube-api-access-jxw86\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-87llq\" (UID: \"647473f3-db71-4daf-b4a6-0abc4af6e53d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:30 crc kubenswrapper[4865]: I0123 12:28:30.662611 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:31 crc kubenswrapper[4865]: I0123 12:28:31.258775 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq"] Jan 23 12:28:31 crc kubenswrapper[4865]: W0123 12:28:31.259214 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod647473f3_db71_4daf_b4a6_0abc4af6e53d.slice/crio-72790d1a58f1e304f12e31c260c972aa6c2b2a2d8819219ac22fa915fa49cd38 WatchSource:0}: Error finding container 72790d1a58f1e304f12e31c260c972aa6c2b2a2d8819219ac22fa915fa49cd38: Status 404 returned error can't find the container with id 72790d1a58f1e304f12e31c260c972aa6c2b2a2d8819219ac22fa915fa49cd38 Jan 23 12:28:31 crc kubenswrapper[4865]: I0123 12:28:31.269047 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" event={"ID":"647473f3-db71-4daf-b4a6-0abc4af6e53d","Type":"ContainerStarted","Data":"72790d1a58f1e304f12e31c260c972aa6c2b2a2d8819219ac22fa915fa49cd38"} Jan 23 12:28:33 crc kubenswrapper[4865]: I0123 12:28:33.286493 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" event={"ID":"647473f3-db71-4daf-b4a6-0abc4af6e53d","Type":"ContainerStarted","Data":"08cbf7f8d17fdb85f09676156b885ef283b8d7837c2899d305a045d9bb641945"} Jan 23 12:28:33 crc kubenswrapper[4865]: I0123 12:28:33.309786 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" podStartSLOduration=2.462106516 podStartE2EDuration="3.309751272s" podCreationTimestamp="2026-01-23 12:28:30 +0000 UTC" firstStartedPulling="2026-01-23 12:28:31.262085791 +0000 UTC m=+2155.431158017" lastFinishedPulling="2026-01-23 12:28:32.109730547 +0000 UTC m=+2156.278802773" 
observedRunningTime="2026-01-23 12:28:33.307444505 +0000 UTC m=+2157.476516741" watchObservedRunningTime="2026-01-23 12:28:33.309751272 +0000 UTC m=+2157.478823508" Jan 23 12:28:42 crc kubenswrapper[4865]: I0123 12:28:42.380096 4865 generic.go:334] "Generic (PLEG): container finished" podID="647473f3-db71-4daf-b4a6-0abc4af6e53d" containerID="08cbf7f8d17fdb85f09676156b885ef283b8d7837c2899d305a045d9bb641945" exitCode=0 Jan 23 12:28:42 crc kubenswrapper[4865]: I0123 12:28:42.380169 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" event={"ID":"647473f3-db71-4daf-b4a6-0abc4af6e53d","Type":"ContainerDied","Data":"08cbf7f8d17fdb85f09676156b885ef283b8d7837c2899d305a045d9bb641945"} Jan 23 12:28:43 crc kubenswrapper[4865]: I0123 12:28:43.754773 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:43 crc kubenswrapper[4865]: I0123 12:28:43.870040 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/647473f3-db71-4daf-b4a6-0abc4af6e53d-ssh-key-openstack-edpm-ipam\") pod \"647473f3-db71-4daf-b4a6-0abc4af6e53d\" (UID: \"647473f3-db71-4daf-b4a6-0abc4af6e53d\") " Jan 23 12:28:43 crc kubenswrapper[4865]: I0123 12:28:43.870989 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/647473f3-db71-4daf-b4a6-0abc4af6e53d-inventory\") pod \"647473f3-db71-4daf-b4a6-0abc4af6e53d\" (UID: \"647473f3-db71-4daf-b4a6-0abc4af6e53d\") " Jan 23 12:28:43 crc kubenswrapper[4865]: I0123 12:28:43.871164 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxw86\" (UniqueName: \"kubernetes.io/projected/647473f3-db71-4daf-b4a6-0abc4af6e53d-kube-api-access-jxw86\") pod \"647473f3-db71-4daf-b4a6-0abc4af6e53d\" (UID: \"647473f3-db71-4daf-b4a6-0abc4af6e53d\") " Jan 23 12:28:43 crc kubenswrapper[4865]: I0123 12:28:43.879528 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647473f3-db71-4daf-b4a6-0abc4af6e53d-kube-api-access-jxw86" (OuterVolumeSpecName: "kube-api-access-jxw86") pod "647473f3-db71-4daf-b4a6-0abc4af6e53d" (UID: "647473f3-db71-4daf-b4a6-0abc4af6e53d"). InnerVolumeSpecName "kube-api-access-jxw86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:28:43 crc kubenswrapper[4865]: I0123 12:28:43.898699 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/647473f3-db71-4daf-b4a6-0abc4af6e53d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "647473f3-db71-4daf-b4a6-0abc4af6e53d" (UID: "647473f3-db71-4daf-b4a6-0abc4af6e53d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:28:43 crc kubenswrapper[4865]: I0123 12:28:43.906972 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/647473f3-db71-4daf-b4a6-0abc4af6e53d-inventory" (OuterVolumeSpecName: "inventory") pod "647473f3-db71-4daf-b4a6-0abc4af6e53d" (UID: "647473f3-db71-4daf-b4a6-0abc4af6e53d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:28:43 crc kubenswrapper[4865]: I0123 12:28:43.974658 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/647473f3-db71-4daf-b4a6-0abc4af6e53d-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:43 crc kubenswrapper[4865]: I0123 12:28:43.975002 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxw86\" (UniqueName: \"kubernetes.io/projected/647473f3-db71-4daf-b4a6-0abc4af6e53d-kube-api-access-jxw86\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:43 crc kubenswrapper[4865]: I0123 12:28:43.975176 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/647473f3-db71-4daf-b4a6-0abc4af6e53d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.401778 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" event={"ID":"647473f3-db71-4daf-b4a6-0abc4af6e53d","Type":"ContainerDied","Data":"72790d1a58f1e304f12e31c260c972aa6c2b2a2d8819219ac22fa915fa49cd38"} Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.401841 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72790d1a58f1e304f12e31c260c972aa6c2b2a2d8819219ac22fa915fa49cd38" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.401936 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-87llq" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.513657 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b"] Jan 23 12:28:44 crc kubenswrapper[4865]: E0123 12:28:44.514238 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647473f3-db71-4daf-b4a6-0abc4af6e53d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.514268 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="647473f3-db71-4daf-b4a6-0abc4af6e53d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.514573 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="647473f3-db71-4daf-b4a6-0abc4af6e53d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.515356 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.520364 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.520849 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.521189 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.534255 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.538801 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b"] Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.588288 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f947349-0885-4c74-994f-bb87c2cf2834-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b\" (UID: \"8f947349-0885-4c74-994f-bb87c2cf2834\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.588355 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f947349-0885-4c74-994f-bb87c2cf2834-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b\" (UID: \"8f947349-0885-4c74-994f-bb87c2cf2834\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.588386 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp27g\" (UniqueName: \"kubernetes.io/projected/8f947349-0885-4c74-994f-bb87c2cf2834-kube-api-access-tp27g\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b\" (UID: \"8f947349-0885-4c74-994f-bb87c2cf2834\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.690542 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f947349-0885-4c74-994f-bb87c2cf2834-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b\" (UID: \"8f947349-0885-4c74-994f-bb87c2cf2834\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.690681 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f947349-0885-4c74-994f-bb87c2cf2834-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b\" (UID: \"8f947349-0885-4c74-994f-bb87c2cf2834\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.690740 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp27g\" (UniqueName: \"kubernetes.io/projected/8f947349-0885-4c74-994f-bb87c2cf2834-kube-api-access-tp27g\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b\" (UID: \"8f947349-0885-4c74-994f-bb87c2cf2834\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.695518 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f947349-0885-4c74-994f-bb87c2cf2834-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b\" (UID: \"8f947349-0885-4c74-994f-bb87c2cf2834\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.696085 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f947349-0885-4c74-994f-bb87c2cf2834-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b\" (UID: \"8f947349-0885-4c74-994f-bb87c2cf2834\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.712133 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp27g\" (UniqueName: \"kubernetes.io/projected/8f947349-0885-4c74-994f-bb87c2cf2834-kube-api-access-tp27g\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b\" (UID: \"8f947349-0885-4c74-994f-bb87c2cf2834\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:44 crc kubenswrapper[4865]: I0123 12:28:44.834082 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:45 crc kubenswrapper[4865]: I0123 12:28:45.403678 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b"] Jan 23 12:28:45 crc kubenswrapper[4865]: W0123 12:28:45.408236 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f947349_0885_4c74_994f_bb87c2cf2834.slice/crio-4dd7d63aa7e0f54e2c62093671e2bdbe396a9b704c14b90d921494cb3ab56418 WatchSource:0}: Error finding container 4dd7d63aa7e0f54e2c62093671e2bdbe396a9b704c14b90d921494cb3ab56418: Status 404 returned error can't find the container with id 4dd7d63aa7e0f54e2c62093671e2bdbe396a9b704c14b90d921494cb3ab56418 Jan 23 12:28:46 crc kubenswrapper[4865]: I0123 12:28:46.419322 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" event={"ID":"8f947349-0885-4c74-994f-bb87c2cf2834","Type":"ContainerStarted","Data":"d9248193d9957a417fa955aecfb4e633ed2ebc406aae267bacacdee6978be22a"} Jan 23 12:28:46 crc kubenswrapper[4865]: I0123 12:28:46.419674 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" event={"ID":"8f947349-0885-4c74-994f-bb87c2cf2834","Type":"ContainerStarted","Data":"4dd7d63aa7e0f54e2c62093671e2bdbe396a9b704c14b90d921494cb3ab56418"} Jan 23 12:28:46 crc kubenswrapper[4865]: I0123 12:28:46.440688 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" podStartSLOduration=1.998726488 podStartE2EDuration="2.440668345s" podCreationTimestamp="2026-01-23 12:28:44 +0000 UTC" firstStartedPulling="2026-01-23 12:28:45.410412549 +0000 UTC m=+2169.579484775" lastFinishedPulling="2026-01-23 12:28:45.852354406 +0000 UTC 
m=+2170.021426632" observedRunningTime="2026-01-23 12:28:46.437459688 +0000 UTC m=+2170.606531914" watchObservedRunningTime="2026-01-23 12:28:46.440668345 +0000 UTC m=+2170.609740571" Jan 23 12:28:48 crc kubenswrapper[4865]: I0123 12:28:48.776096 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:28:48 crc kubenswrapper[4865]: I0123 12:28:48.776144 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:28:57 crc kubenswrapper[4865]: I0123 12:28:57.514926 4865 generic.go:334] "Generic (PLEG): container finished" podID="8f947349-0885-4c74-994f-bb87c2cf2834" containerID="d9248193d9957a417fa955aecfb4e633ed2ebc406aae267bacacdee6978be22a" exitCode=0 Jan 23 12:28:57 crc kubenswrapper[4865]: I0123 12:28:57.515017 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" event={"ID":"8f947349-0885-4c74-994f-bb87c2cf2834","Type":"ContainerDied","Data":"d9248193d9957a417fa955aecfb4e633ed2ebc406aae267bacacdee6978be22a"} Jan 23 12:28:58 crc kubenswrapper[4865]: I0123 12:28:58.958094 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.064035 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp27g\" (UniqueName: \"kubernetes.io/projected/8f947349-0885-4c74-994f-bb87c2cf2834-kube-api-access-tp27g\") pod \"8f947349-0885-4c74-994f-bb87c2cf2834\" (UID: \"8f947349-0885-4c74-994f-bb87c2cf2834\") " Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.064256 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f947349-0885-4c74-994f-bb87c2cf2834-ssh-key-openstack-edpm-ipam\") pod \"8f947349-0885-4c74-994f-bb87c2cf2834\" (UID: \"8f947349-0885-4c74-994f-bb87c2cf2834\") " Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.064341 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f947349-0885-4c74-994f-bb87c2cf2834-inventory\") pod \"8f947349-0885-4c74-994f-bb87c2cf2834\" (UID: \"8f947349-0885-4c74-994f-bb87c2cf2834\") " Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.070466 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f947349-0885-4c74-994f-bb87c2cf2834-kube-api-access-tp27g" (OuterVolumeSpecName: "kube-api-access-tp27g") pod "8f947349-0885-4c74-994f-bb87c2cf2834" (UID: "8f947349-0885-4c74-994f-bb87c2cf2834"). InnerVolumeSpecName "kube-api-access-tp27g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.093043 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f947349-0885-4c74-994f-bb87c2cf2834-inventory" (OuterVolumeSpecName: "inventory") pod "8f947349-0885-4c74-994f-bb87c2cf2834" (UID: "8f947349-0885-4c74-994f-bb87c2cf2834"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.095188 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f947349-0885-4c74-994f-bb87c2cf2834-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8f947349-0885-4c74-994f-bb87c2cf2834" (UID: "8f947349-0885-4c74-994f-bb87c2cf2834"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.166964 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8f947349-0885-4c74-994f-bb87c2cf2834-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.167144 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8f947349-0885-4c74-994f-bb87c2cf2834-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.167236 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp27g\" (UniqueName: \"kubernetes.io/projected/8f947349-0885-4c74-994f-bb87c2cf2834-kube-api-access-tp27g\") on node \"crc\" DevicePath \"\"" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.536118 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" event={"ID":"8f947349-0885-4c74-994f-bb87c2cf2834","Type":"ContainerDied","Data":"4dd7d63aa7e0f54e2c62093671e2bdbe396a9b704c14b90d921494cb3ab56418"} Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.536169 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dd7d63aa7e0f54e2c62093671e2bdbe396a9b704c14b90d921494cb3ab56418" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.536221 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5sl6b" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.727077 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6"] Jan 23 12:28:59 crc kubenswrapper[4865]: E0123 12:28:59.727864 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f947349-0885-4c74-994f-bb87c2cf2834" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.727887 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f947349-0885-4c74-994f-bb87c2cf2834" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.728129 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f947349-0885-4c74-994f-bb87c2cf2834" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.728878 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.732376 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.732563 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.732701 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.732811 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.732912 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.733065 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.736244 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.736418 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.766540 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6"] Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779356 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779427 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779473 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779520 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779553 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779577 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779619 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779680 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phwnh\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-kube-api-access-phwnh\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779702 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779723 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779749 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: 
\"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779771 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779803 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.779827 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.881198 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.881615 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.881723 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.881859 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.881982 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.882079 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.882154 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.882241 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.882337 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phwnh\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-kube-api-access-phwnh\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.882411 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.882481 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.883058 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.883165 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.883275 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.886500 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.887013 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.887420 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.889097 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.889501 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc 
kubenswrapper[4865]: I0123 12:28:59.889904 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.891179 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.891405 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.892674 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.893360 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.894749 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.895761 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.896008 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:28:59 crc kubenswrapper[4865]: I0123 12:28:59.905156 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phwnh\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-kube-api-access-phwnh\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:29:00 crc kubenswrapper[4865]: I0123 12:29:00.048905 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:29:00 crc kubenswrapper[4865]: I0123 12:29:00.668132 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6"] Jan 23 12:29:01 crc kubenswrapper[4865]: I0123 12:29:01.554237 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" event={"ID":"226dbf99-77b9-4239-b856-1ea3453f0f37","Type":"ContainerStarted","Data":"7dba1a291d507118e532c52ac3c5ae41907ac30ff9afe50c1d4e1aa4d05221df"} Jan 23 12:29:01 crc kubenswrapper[4865]: I0123 12:29:01.554785 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" event={"ID":"226dbf99-77b9-4239-b856-1ea3453f0f37","Type":"ContainerStarted","Data":"1e72f54f9cf2c755abc74ddd7c335276f0094cd48449a369b6c3af67ca6c9179"} Jan 23 12:29:01 crc kubenswrapper[4865]: I0123 12:29:01.573856 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" podStartSLOduration=2.105017851 podStartE2EDuration="2.573828648s" podCreationTimestamp="2026-01-23 12:28:59 +0000 UTC" firstStartedPulling="2026-01-23 12:29:00.675844005 +0000 UTC m=+2184.844916241" lastFinishedPulling="2026-01-23 12:29:01.144654812 +0000 UTC m=+2185.313727038" observedRunningTime="2026-01-23 12:29:01.572426865 +0000 UTC m=+2185.741499091" watchObservedRunningTime="2026-01-23 12:29:01.573828648 +0000 UTC m=+2185.742900874" Jan 23 12:29:18 crc kubenswrapper[4865]: I0123 12:29:18.776125 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:29:18 crc kubenswrapper[4865]: I0123 12:29:18.777025 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:29:18 crc kubenswrapper[4865]: I0123 12:29:18.777119 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 12:29:18 crc kubenswrapper[4865]: I0123 12:29:18.778566 4865 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e0926f65a291664cd50747401caf9b0e97c344dae0b4a2a81e701f0c1468f90"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 12:29:18 crc kubenswrapper[4865]: I0123 12:29:18.778733 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://9e0926f65a291664cd50747401caf9b0e97c344dae0b4a2a81e701f0c1468f90" gracePeriod=600 Jan 23 12:29:19 crc kubenswrapper[4865]: I0123 12:29:19.705909 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="9e0926f65a291664cd50747401caf9b0e97c344dae0b4a2a81e701f0c1468f90" exitCode=0 Jan 23 12:29:19 crc kubenswrapper[4865]: I0123 12:29:19.705972 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"9e0926f65a291664cd50747401caf9b0e97c344dae0b4a2a81e701f0c1468f90"} Jan 23 12:29:19 crc kubenswrapper[4865]: I0123 12:29:19.706474 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06"} Jan 23 12:29:19 crc kubenswrapper[4865]: I0123 12:29:19.706498 4865 scope.go:117] "RemoveContainer" containerID="764e5d6422fbbbc9aa51ca3b8125a7c6b14895b296e9ba391c24c2f33664e725" Jan 23 12:29:45 crc kubenswrapper[4865]: I0123 12:29:45.020735 4865 generic.go:334] "Generic (PLEG): container finished" podID="226dbf99-77b9-4239-b856-1ea3453f0f37" containerID="7dba1a291d507118e532c52ac3c5ae41907ac30ff9afe50c1d4e1aa4d05221df" exitCode=0 Jan 23 12:29:45 crc kubenswrapper[4865]: I0123 12:29:45.020823 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" event={"ID":"226dbf99-77b9-4239-b856-1ea3453f0f37","Type":"ContainerDied","Data":"7dba1a291d507118e532c52ac3c5ae41907ac30ff9afe50c1d4e1aa4d05221df"} Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.471620 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.602705 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.602804 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-nova-combined-ca-bundle\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.602890 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.602924 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-ssh-key-openstack-edpm-ipam\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.602958 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-repo-setup-combined-ca-bundle\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.602984 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-inventory\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.603027 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-ovn-default-certs-0\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.603086 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.603114 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-neutron-metadata-combined-ca-bundle\") pod 
\"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.603159 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-ovn-combined-ca-bundle\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.603190 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-telemetry-combined-ca-bundle\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.603286 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-libvirt-combined-ca-bundle\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.603354 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-bootstrap-combined-ca-bundle\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.603383 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phwnh\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-kube-api-access-phwnh\") pod \"226dbf99-77b9-4239-b856-1ea3453f0f37\" (UID: \"226dbf99-77b9-4239-b856-1ea3453f0f37\") " Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.609090 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.609328 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.609686 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.612115 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.612287 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.612809 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.613442 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.614116 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-kube-api-access-phwnh" (OuterVolumeSpecName: "kube-api-access-phwnh") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "kube-api-access-phwnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.615463 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.616163 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.618507 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.621736 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.637868 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-inventory" (OuterVolumeSpecName: "inventory") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.650633 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "226dbf99-77b9-4239-b856-1ea3453f0f37" (UID: "226dbf99-77b9-4239-b856-1ea3453f0f37"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.705094 4865 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.705272 4865 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.705374 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phwnh\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-kube-api-access-phwnh\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.705440 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.705496 4865 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.705559 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.705657 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.705714 4865 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.705784 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.705841 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.705901 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/226dbf99-77b9-4239-b856-1ea3453f0f37-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.705965 4865 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.706018 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:46 crc kubenswrapper[4865]: I0123 12:29:46.706109 4865 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/226dbf99-77b9-4239-b856-1ea3453f0f37-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.038456 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" event={"ID":"226dbf99-77b9-4239-b856-1ea3453f0f37","Type":"ContainerDied","Data":"1e72f54f9cf2c755abc74ddd7c335276f0094cd48449a369b6c3af67ca6c9179"} Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.038536 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e72f54f9cf2c755abc74ddd7c335276f0094cd48449a369b6c3af67ca6c9179" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.038634 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g5cj6" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.169168 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm"] Jan 23 12:29:47 crc kubenswrapper[4865]: E0123 12:29:47.213635 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="226dbf99-77b9-4239-b856-1ea3453f0f37" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.213672 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="226dbf99-77b9-4239-b856-1ea3453f0f37" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.214019 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="226dbf99-77b9-4239-b856-1ea3453f0f37" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.214879 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.217863 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.218227 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.218538 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.218856 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.220330 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.223036 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm"] Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.317467 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.317650 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e32a1952-2f35-4d48-ad11-6d569504f572-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.317727 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.317778 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.317808 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpz6x\" (UniqueName: \"kubernetes.io/projected/e32a1952-2f35-4d48-ad11-6d569504f572-kube-api-access-jpz6x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.419322 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.419405 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.419432 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpz6x\" (UniqueName: \"kubernetes.io/projected/e32a1952-2f35-4d48-ad11-6d569504f572-kube-api-access-jpz6x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.419499 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.419555 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e32a1952-2f35-4d48-ad11-6d569504f572-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.420398 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e32a1952-2f35-4d48-ad11-6d569504f572-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.430242 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.431219 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.436490 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.436945 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpz6x\" (UniqueName: \"kubernetes.io/projected/e32a1952-2f35-4d48-ad11-6d569504f572-kube-api-access-jpz6x\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-czrfm\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:47 crc kubenswrapper[4865]: I0123 12:29:47.544940 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:29:48 crc kubenswrapper[4865]: I0123 12:29:48.054298 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm"] Jan 23 12:29:49 crc kubenswrapper[4865]: I0123 12:29:49.064578 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" event={"ID":"e32a1952-2f35-4d48-ad11-6d569504f572","Type":"ContainerStarted","Data":"56619e637289fefea4a0ef974059ea1ead6a619ab819446486a99854b41a918f"} Jan 23 12:29:49 crc kubenswrapper[4865]: I0123 12:29:49.064908 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" event={"ID":"e32a1952-2f35-4d48-ad11-6d569504f572","Type":"ContainerStarted","Data":"dc69e1e610edc1ef04be65fba793be22979fb3a8939ba8d4f7bfe47e1e31fd66"} Jan 23 12:29:49 crc kubenswrapper[4865]: I0123 12:29:49.091040 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" podStartSLOduration=1.6468789990000001 podStartE2EDuration="2.090987046s" podCreationTimestamp="2026-01-23 12:29:47 +0000 UTC" firstStartedPulling="2026-01-23 12:29:48.065812672 +0000 UTC m=+2232.234884918" lastFinishedPulling="2026-01-23 12:29:48.509920739 +0000 UTC m=+2232.678992965" observedRunningTime="2026-01-23 12:29:49.087258925 +0000 UTC m=+2233.256331151" watchObservedRunningTime="2026-01-23 12:29:49.090987046 +0000 UTC m=+2233.260059272" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.149963 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k"] Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.151720 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.158389 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.159081 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.175581 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d524ba94-3359-41d2-a515-056bb6331dde-config-volume\") pod \"collect-profiles-29486190-mz84k\" (UID: \"d524ba94-3359-41d2-a515-056bb6331dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.176068 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqz9q\" (UniqueName: \"kubernetes.io/projected/d524ba94-3359-41d2-a515-056bb6331dde-kube-api-access-sqz9q\") pod \"collect-profiles-29486190-mz84k\" (UID: \"d524ba94-3359-41d2-a515-056bb6331dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.176306 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d524ba94-3359-41d2-a515-056bb6331dde-secret-volume\") pod \"collect-profiles-29486190-mz84k\" (UID: \"d524ba94-3359-41d2-a515-056bb6331dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.214459 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k"] Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.277339 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d524ba94-3359-41d2-a515-056bb6331dde-secret-volume\") pod \"collect-profiles-29486190-mz84k\" (UID: \"d524ba94-3359-41d2-a515-056bb6331dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.277843 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d524ba94-3359-41d2-a515-056bb6331dde-config-volume\") pod \"collect-profiles-29486190-mz84k\" (UID: \"d524ba94-3359-41d2-a515-056bb6331dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.277876 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqz9q\" (UniqueName: \"kubernetes.io/projected/d524ba94-3359-41d2-a515-056bb6331dde-kube-api-access-sqz9q\") pod \"collect-profiles-29486190-mz84k\" (UID: \"d524ba94-3359-41d2-a515-056bb6331dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.280378 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d524ba94-3359-41d2-a515-056bb6331dde-config-volume\") pod 
\"collect-profiles-29486190-mz84k\" (UID: \"d524ba94-3359-41d2-a515-056bb6331dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.295618 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d524ba94-3359-41d2-a515-056bb6331dde-secret-volume\") pod \"collect-profiles-29486190-mz84k\" (UID: \"d524ba94-3359-41d2-a515-056bb6331dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.299737 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqz9q\" (UniqueName: \"kubernetes.io/projected/d524ba94-3359-41d2-a515-056bb6331dde-kube-api-access-sqz9q\") pod \"collect-profiles-29486190-mz84k\" (UID: \"d524ba94-3359-41d2-a515-056bb6331dde\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:00 crc kubenswrapper[4865]: I0123 12:30:00.523683 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:01 crc kubenswrapper[4865]: I0123 12:30:01.003886 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k"] Jan 23 12:30:01 crc kubenswrapper[4865]: I0123 12:30:01.167044 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" event={"ID":"d524ba94-3359-41d2-a515-056bb6331dde","Type":"ContainerStarted","Data":"7e20bacaba191aed072a800653b0fe4e9f2ea2107223dfeee10825a67afbd7eb"} Jan 23 12:30:01 crc kubenswrapper[4865]: I0123 12:30:01.167384 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" event={"ID":"d524ba94-3359-41d2-a515-056bb6331dde","Type":"ContainerStarted","Data":"a21fb5e0a93cf0c2fdb1cf39ae726070affb1b1c0b2b29192ae75d0af8f52fd2"} Jan 23 12:30:01 crc kubenswrapper[4865]: I0123 12:30:01.186198 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" podStartSLOduration=1.186179388 podStartE2EDuration="1.186179388s" podCreationTimestamp="2026-01-23 12:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 12:30:01.183291748 +0000 UTC m=+2245.352363984" watchObservedRunningTime="2026-01-23 12:30:01.186179388 +0000 UTC m=+2245.355251614" Jan 23 12:30:02 crc kubenswrapper[4865]: I0123 12:30:02.187246 4865 generic.go:334] "Generic (PLEG): container finished" podID="d524ba94-3359-41d2-a515-056bb6331dde" containerID="7e20bacaba191aed072a800653b0fe4e9f2ea2107223dfeee10825a67afbd7eb" exitCode=0 Jan 23 12:30:02 crc kubenswrapper[4865]: I0123 12:30:02.187300 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" event={"ID":"d524ba94-3359-41d2-a515-056bb6331dde","Type":"ContainerDied","Data":"7e20bacaba191aed072a800653b0fe4e9f2ea2107223dfeee10825a67afbd7eb"} Jan 23 12:30:03 crc kubenswrapper[4865]: I0123 12:30:03.562217 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:03 crc kubenswrapper[4865]: I0123 12:30:03.746567 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d524ba94-3359-41d2-a515-056bb6331dde-secret-volume\") pod \"d524ba94-3359-41d2-a515-056bb6331dde\" (UID: \"d524ba94-3359-41d2-a515-056bb6331dde\") " Jan 23 12:30:03 crc kubenswrapper[4865]: I0123 12:30:03.746837 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d524ba94-3359-41d2-a515-056bb6331dde-config-volume\") pod \"d524ba94-3359-41d2-a515-056bb6331dde\" (UID: \"d524ba94-3359-41d2-a515-056bb6331dde\") " Jan 23 12:30:03 crc kubenswrapper[4865]: I0123 12:30:03.747144 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqz9q\" (UniqueName: \"kubernetes.io/projected/d524ba94-3359-41d2-a515-056bb6331dde-kube-api-access-sqz9q\") pod \"d524ba94-3359-41d2-a515-056bb6331dde\" (UID: \"d524ba94-3359-41d2-a515-056bb6331dde\") " Jan 23 12:30:03 crc kubenswrapper[4865]: I0123 12:30:03.748118 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d524ba94-3359-41d2-a515-056bb6331dde-config-volume" (OuterVolumeSpecName: "config-volume") pod "d524ba94-3359-41d2-a515-056bb6331dde" (UID: "d524ba94-3359-41d2-a515-056bb6331dde"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:30:03 crc kubenswrapper[4865]: I0123 12:30:03.756591 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d524ba94-3359-41d2-a515-056bb6331dde-kube-api-access-sqz9q" (OuterVolumeSpecName: "kube-api-access-sqz9q") pod "d524ba94-3359-41d2-a515-056bb6331dde" (UID: "d524ba94-3359-41d2-a515-056bb6331dde"). InnerVolumeSpecName "kube-api-access-sqz9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:30:03 crc kubenswrapper[4865]: I0123 12:30:03.757230 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d524ba94-3359-41d2-a515-056bb6331dde-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d524ba94-3359-41d2-a515-056bb6331dde" (UID: "d524ba94-3359-41d2-a515-056bb6331dde"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:30:03 crc kubenswrapper[4865]: I0123 12:30:03.849392 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqz9q\" (UniqueName: \"kubernetes.io/projected/d524ba94-3359-41d2-a515-056bb6331dde-kube-api-access-sqz9q\") on node \"crc\" DevicePath \"\"" Jan 23 12:30:03 crc kubenswrapper[4865]: I0123 12:30:03.849586 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d524ba94-3359-41d2-a515-056bb6331dde-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 12:30:03 crc kubenswrapper[4865]: I0123 12:30:03.849673 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d524ba94-3359-41d2-a515-056bb6331dde-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 12:30:04 crc kubenswrapper[4865]: I0123 12:30:04.207413 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" event={"ID":"d524ba94-3359-41d2-a515-056bb6331dde","Type":"ContainerDied","Data":"a21fb5e0a93cf0c2fdb1cf39ae726070affb1b1c0b2b29192ae75d0af8f52fd2"} Jan 23 12:30:04 crc kubenswrapper[4865]: I0123 12:30:04.208095 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a21fb5e0a93cf0c2fdb1cf39ae726070affb1b1c0b2b29192ae75d0af8f52fd2" Jan 23 12:30:04 crc kubenswrapper[4865]: I0123 12:30:04.207533 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486190-mz84k" Jan 23 12:30:04 crc kubenswrapper[4865]: I0123 12:30:04.267929 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s"] Jan 23 12:30:04 crc kubenswrapper[4865]: I0123 12:30:04.277761 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486145-rzp9s"] Jan 23 12:30:06 crc kubenswrapper[4865]: I0123 12:30:06.133125 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b60cff5f-ff90-4d9a-9980-f2d0ebce2aed" path="/var/lib/kubelet/pods/b60cff5f-ff90-4d9a-9980-f2d0ebce2aed/volumes" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.574925 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nc4rf"] Jan 23 12:30:35 crc kubenswrapper[4865]: E0123 12:30:35.575959 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d524ba94-3359-41d2-a515-056bb6331dde" containerName="collect-profiles" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.575976 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d524ba94-3359-41d2-a515-056bb6331dde" containerName="collect-profiles" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.576286 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d524ba94-3359-41d2-a515-056bb6331dde" containerName="collect-profiles" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.583098 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.596059 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nc4rf"] Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.731863 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1242668-df22-4d55-a7de-651dd7de89fd-utilities\") pod \"redhat-operators-nc4rf\" (UID: \"b1242668-df22-4d55-a7de-651dd7de89fd\") " pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.731959 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1242668-df22-4d55-a7de-651dd7de89fd-catalog-content\") pod \"redhat-operators-nc4rf\" (UID: \"b1242668-df22-4d55-a7de-651dd7de89fd\") " pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.733868 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2cwp\" (UniqueName: \"kubernetes.io/projected/b1242668-df22-4d55-a7de-651dd7de89fd-kube-api-access-m2cwp\") pod \"redhat-operators-nc4rf\" (UID: \"b1242668-df22-4d55-a7de-651dd7de89fd\") " pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.835857 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2cwp\" (UniqueName: \"kubernetes.io/projected/b1242668-df22-4d55-a7de-651dd7de89fd-kube-api-access-m2cwp\") pod \"redhat-operators-nc4rf\" (UID: \"b1242668-df22-4d55-a7de-651dd7de89fd\") " pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.836011 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1242668-df22-4d55-a7de-651dd7de89fd-utilities\") pod \"redhat-operators-nc4rf\" (UID: \"b1242668-df22-4d55-a7de-651dd7de89fd\") " pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.836060 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1242668-df22-4d55-a7de-651dd7de89fd-catalog-content\") pod \"redhat-operators-nc4rf\" (UID: \"b1242668-df22-4d55-a7de-651dd7de89fd\") " pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.836629 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1242668-df22-4d55-a7de-651dd7de89fd-catalog-content\") pod \"redhat-operators-nc4rf\" (UID: \"b1242668-df22-4d55-a7de-651dd7de89fd\") " pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.836649 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1242668-df22-4d55-a7de-651dd7de89fd-utilities\") pod \"redhat-operators-nc4rf\" (UID: \"b1242668-df22-4d55-a7de-651dd7de89fd\") " pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.869039 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m2cwp\" (UniqueName: \"kubernetes.io/projected/b1242668-df22-4d55-a7de-651dd7de89fd-kube-api-access-m2cwp\") pod \"redhat-operators-nc4rf\" (UID: \"b1242668-df22-4d55-a7de-651dd7de89fd\") " pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:30:35 crc kubenswrapper[4865]: I0123 12:30:35.914238 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:30:36 crc kubenswrapper[4865]: I0123 12:30:36.416044 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nc4rf"] Jan 23 12:30:36 crc kubenswrapper[4865]: I0123 12:30:36.504361 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc4rf" event={"ID":"b1242668-df22-4d55-a7de-651dd7de89fd","Type":"ContainerStarted","Data":"cce771c3ff2e5d2b3a5dff12df367da5077253344329b77ea60b17ee43ec0c98"} Jan 23 12:30:37 crc kubenswrapper[4865]: I0123 12:30:37.513422 4865 generic.go:334] "Generic (PLEG): container finished" podID="b1242668-df22-4d55-a7de-651dd7de89fd" containerID="cd6ea248acc6d9232d0ab22993ba896fe05d211b780ab60b6d80f67d8c76586c" exitCode=0 Jan 23 12:30:37 crc kubenswrapper[4865]: I0123 12:30:37.513497 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc4rf" event={"ID":"b1242668-df22-4d55-a7de-651dd7de89fd","Type":"ContainerDied","Data":"cd6ea248acc6d9232d0ab22993ba896fe05d211b780ab60b6d80f67d8c76586c"} Jan 23 12:30:38 crc kubenswrapper[4865]: I0123 12:30:38.523694 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc4rf" event={"ID":"b1242668-df22-4d55-a7de-651dd7de89fd","Type":"ContainerStarted","Data":"4549df5b08bfb42d19c5af5c925cf6e5362baa1cb475142189b47b38299107e9"} Jan 23 12:30:43 crc kubenswrapper[4865]: I0123 12:30:43.566685 4865 generic.go:334] "Generic (PLEG): container finished" podID="b1242668-df22-4d55-a7de-651dd7de89fd" containerID="4549df5b08bfb42d19c5af5c925cf6e5362baa1cb475142189b47b38299107e9" exitCode=0 Jan 23 12:30:43 crc kubenswrapper[4865]: I0123 12:30:43.566740 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc4rf" event={"ID":"b1242668-df22-4d55-a7de-651dd7de89fd","Type":"ContainerDied","Data":"4549df5b08bfb42d19c5af5c925cf6e5362baa1cb475142189b47b38299107e9"} Jan 23 12:30:44 crc kubenswrapper[4865]: I0123 12:30:44.575983 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc4rf" event={"ID":"b1242668-df22-4d55-a7de-651dd7de89fd","Type":"ContainerStarted","Data":"b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90"} Jan 23 12:30:44 crc kubenswrapper[4865]: I0123 12:30:44.595561 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nc4rf" podStartSLOduration=3.082177123 podStartE2EDuration="9.595542681s" podCreationTimestamp="2026-01-23 12:30:35 +0000 UTC" firstStartedPulling="2026-01-23 12:30:37.5168463 +0000 UTC m=+2281.685918526" lastFinishedPulling="2026-01-23 12:30:44.030211858 +0000 UTC m=+2288.199284084" observedRunningTime="2026-01-23 12:30:44.591117173 +0000 UTC m=+2288.760189399" watchObservedRunningTime="2026-01-23 12:30:44.595542681 +0000 UTC m=+2288.764614917" Jan 23 12:30:45 crc kubenswrapper[4865]: I0123 12:30:45.914500 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nc4rf" Jan 
23 12:30:45 crc kubenswrapper[4865]: I0123 12:30:45.914568 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:30:46 crc kubenswrapper[4865]: I0123 12:30:46.965309 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nc4rf" podUID="b1242668-df22-4d55-a7de-651dd7de89fd" containerName="registry-server" probeResult="failure" output=< Jan 23 12:30:46 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:30:46 crc kubenswrapper[4865]: > Jan 23 12:30:53 crc kubenswrapper[4865]: I0123 12:30:53.708253 4865 scope.go:117] "RemoveContainer" containerID="e9c3e1560f5b66efcbb55cce9e1082cfa19890fbcae89e199e02c135ef2d6496" Jan 23 12:30:56 crc kubenswrapper[4865]: I0123 12:30:56.961415 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nc4rf" podUID="b1242668-df22-4d55-a7de-651dd7de89fd" containerName="registry-server" probeResult="failure" output=< Jan 23 12:30:56 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:30:56 crc kubenswrapper[4865]: > Jan 23 12:31:03 crc kubenswrapper[4865]: I0123 12:31:03.766105 4865 generic.go:334] "Generic (PLEG): container finished" podID="e32a1952-2f35-4d48-ad11-6d569504f572" containerID="56619e637289fefea4a0ef974059ea1ead6a619ab819446486a99854b41a918f" exitCode=0 Jan 23 12:31:03 crc kubenswrapper[4865]: I0123 12:31:03.766216 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" event={"ID":"e32a1952-2f35-4d48-ad11-6d569504f572","Type":"ContainerDied","Data":"56619e637289fefea4a0ef974059ea1ead6a619ab819446486a99854b41a918f"} Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.298825 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.349823 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-ssh-key-openstack-edpm-ipam\") pod \"e32a1952-2f35-4d48-ad11-6d569504f572\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.349872 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e32a1952-2f35-4d48-ad11-6d569504f572-ovncontroller-config-0\") pod \"e32a1952-2f35-4d48-ad11-6d569504f572\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.349906 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-ovn-combined-ca-bundle\") pod \"e32a1952-2f35-4d48-ad11-6d569504f572\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.349941 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-inventory\") pod \"e32a1952-2f35-4d48-ad11-6d569504f572\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.351530 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpz6x\" (UniqueName: \"kubernetes.io/projected/e32a1952-2f35-4d48-ad11-6d569504f572-kube-api-access-jpz6x\") pod \"e32a1952-2f35-4d48-ad11-6d569504f572\" (UID: \"e32a1952-2f35-4d48-ad11-6d569504f572\") " Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.356245 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e32a1952-2f35-4d48-ad11-6d569504f572" (UID: "e32a1952-2f35-4d48-ad11-6d569504f572"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.357591 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e32a1952-2f35-4d48-ad11-6d569504f572-kube-api-access-jpz6x" (OuterVolumeSpecName: "kube-api-access-jpz6x") pod "e32a1952-2f35-4d48-ad11-6d569504f572" (UID: "e32a1952-2f35-4d48-ad11-6d569504f572"). InnerVolumeSpecName "kube-api-access-jpz6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.379491 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-inventory" (OuterVolumeSpecName: "inventory") pod "e32a1952-2f35-4d48-ad11-6d569504f572" (UID: "e32a1952-2f35-4d48-ad11-6d569504f572"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.388285 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e32a1952-2f35-4d48-ad11-6d569504f572-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "e32a1952-2f35-4d48-ad11-6d569504f572" (UID: "e32a1952-2f35-4d48-ad11-6d569504f572"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.398051 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e32a1952-2f35-4d48-ad11-6d569504f572" (UID: "e32a1952-2f35-4d48-ad11-6d569504f572"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.455681 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.455710 4865 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e32a1952-2f35-4d48-ad11-6d569504f572-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.455720 4865 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.455728 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e32a1952-2f35-4d48-ad11-6d569504f572-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.455736 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpz6x\" (UniqueName: \"kubernetes.io/projected/e32a1952-2f35-4d48-ad11-6d569504f572-kube-api-access-jpz6x\") on node \"crc\" DevicePath \"\"" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.786490 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" event={"ID":"e32a1952-2f35-4d48-ad11-6d569504f572","Type":"ContainerDied","Data":"dc69e1e610edc1ef04be65fba793be22979fb3a8939ba8d4f7bfe47e1e31fd66"} Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.786799 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc69e1e610edc1ef04be65fba793be22979fb3a8939ba8d4f7bfe47e1e31fd66" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.786549 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-czrfm" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.911259 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4"] Jan 23 12:31:05 crc kubenswrapper[4865]: E0123 12:31:05.911902 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e32a1952-2f35-4d48-ad11-6d569504f572" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.911988 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="e32a1952-2f35-4d48-ad11-6d569504f572" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.912248 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="e32a1952-2f35-4d48-ad11-6d569504f572" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.912938 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.919953 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.920306 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.920397 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.920383 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.921389 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.921626 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.955069 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4"] Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.963785 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.964083 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.964269 4865 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.964429 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.964580 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds96r\" (UniqueName: \"kubernetes.io/projected/84909230-47a6-4048-9d60-f80aa8a987aa-kube-api-access-ds96r\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:05 crc kubenswrapper[4865]: I0123 12:31:05.964685 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.030228 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.066036 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.066114 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.066162 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds96r\" (UniqueName: \"kubernetes.io/projected/84909230-47a6-4048-9d60-f80aa8a987aa-kube-api-access-ds96r\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.066182 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.066225 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.066246 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.072730 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.073360 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.075058 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.076335 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.076453 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: 
\"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.084716 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds96r\" (UniqueName: \"kubernetes.io/projected/84909230-47a6-4048-9d60-f80aa8a987aa-kube-api-access-ds96r\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.092131 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.231173 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.784733 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nc4rf"] Jan 23 12:31:06 crc kubenswrapper[4865]: I0123 12:31:06.792399 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4"] Jan 23 12:31:07 crc kubenswrapper[4865]: I0123 12:31:07.808961 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nc4rf" podUID="b1242668-df22-4d55-a7de-651dd7de89fd" containerName="registry-server" containerID="cri-o://b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90" gracePeriod=2 Jan 23 12:31:07 crc kubenswrapper[4865]: I0123 12:31:07.809514 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" event={"ID":"84909230-47a6-4048-9d60-f80aa8a987aa","Type":"ContainerStarted","Data":"a2a030add419a4da82929cb9b31a811e7bca6d1cf1c236f686c2e9bba44304ce"} Jan 23 12:31:07 crc kubenswrapper[4865]: I0123 12:31:07.809537 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" event={"ID":"84909230-47a6-4048-9d60-f80aa8a987aa","Type":"ContainerStarted","Data":"c200d8d71e4d816f9d3eac55133eac3c831f1c2692571f0f970167422ebb16c8"} Jan 23 12:31:07 crc kubenswrapper[4865]: I0123 12:31:07.842243 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" podStartSLOduration=2.380865384 podStartE2EDuration="2.842219691s" podCreationTimestamp="2026-01-23 12:31:05 +0000 UTC" firstStartedPulling="2026-01-23 12:31:06.810148169 +0000 UTC m=+2310.979220395" lastFinishedPulling="2026-01-23 12:31:07.271502476 +0000 UTC m=+2311.440574702" observedRunningTime="2026-01-23 12:31:07.832958945 +0000 UTC m=+2312.002031161" watchObservedRunningTime="2026-01-23 12:31:07.842219691 +0000 UTC m=+2312.011291917" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.191157 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.222047 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1242668-df22-4d55-a7de-651dd7de89fd-utilities\") pod \"b1242668-df22-4d55-a7de-651dd7de89fd\" (UID: \"b1242668-df22-4d55-a7de-651dd7de89fd\") " Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.222271 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1242668-df22-4d55-a7de-651dd7de89fd-catalog-content\") pod \"b1242668-df22-4d55-a7de-651dd7de89fd\" (UID: \"b1242668-df22-4d55-a7de-651dd7de89fd\") " Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.222424 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2cwp\" (UniqueName: \"kubernetes.io/projected/b1242668-df22-4d55-a7de-651dd7de89fd-kube-api-access-m2cwp\") pod \"b1242668-df22-4d55-a7de-651dd7de89fd\" (UID: \"b1242668-df22-4d55-a7de-651dd7de89fd\") " Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.222749 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1242668-df22-4d55-a7de-651dd7de89fd-utilities" (OuterVolumeSpecName: "utilities") pod "b1242668-df22-4d55-a7de-651dd7de89fd" (UID: "b1242668-df22-4d55-a7de-651dd7de89fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.223148 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1242668-df22-4d55-a7de-651dd7de89fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.227414 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1242668-df22-4d55-a7de-651dd7de89fd-kube-api-access-m2cwp" (OuterVolumeSpecName: "kube-api-access-m2cwp") pod "b1242668-df22-4d55-a7de-651dd7de89fd" (UID: "b1242668-df22-4d55-a7de-651dd7de89fd"). InnerVolumeSpecName "kube-api-access-m2cwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.324672 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2cwp\" (UniqueName: \"kubernetes.io/projected/b1242668-df22-4d55-a7de-651dd7de89fd-kube-api-access-m2cwp\") on node \"crc\" DevicePath \"\"" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.365381 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1242668-df22-4d55-a7de-651dd7de89fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1242668-df22-4d55-a7de-651dd7de89fd" (UID: "b1242668-df22-4d55-a7de-651dd7de89fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.427117 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1242668-df22-4d55-a7de-651dd7de89fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.820309 4865 generic.go:334] "Generic (PLEG): container finished" podID="b1242668-df22-4d55-a7de-651dd7de89fd" containerID="b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90" exitCode=0 Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.821082 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nc4rf" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.823725 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc4rf" event={"ID":"b1242668-df22-4d55-a7de-651dd7de89fd","Type":"ContainerDied","Data":"b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90"} Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.823778 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nc4rf" event={"ID":"b1242668-df22-4d55-a7de-651dd7de89fd","Type":"ContainerDied","Data":"cce771c3ff2e5d2b3a5dff12df367da5077253344329b77ea60b17ee43ec0c98"} Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.823800 4865 scope.go:117] "RemoveContainer" containerID="b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.862186 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nc4rf"] Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.862814 4865 scope.go:117] "RemoveContainer" containerID="4549df5b08bfb42d19c5af5c925cf6e5362baa1cb475142189b47b38299107e9" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.870996 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nc4rf"] Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.883008 4865 scope.go:117] "RemoveContainer" containerID="cd6ea248acc6d9232d0ab22993ba896fe05d211b780ab60b6d80f67d8c76586c" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.932855 4865 scope.go:117] "RemoveContainer" containerID="b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90" Jan 23 12:31:08 crc kubenswrapper[4865]: E0123 12:31:08.933287 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90\": container with ID starting with b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90 not found: ID does not exist" containerID="b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.933318 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90"} err="failed to get container status \"b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90\": rpc error: code = NotFound desc = could not find container \"b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90\": container with ID starting with b8fd7be7d9184f5ae3edaeb40baf71fdc28ac7d9122ccdf58274b1af8d7c8b90 not found: ID does not exist" Jan 23 12:31:08 crc 
kubenswrapper[4865]: I0123 12:31:08.933339 4865 scope.go:117] "RemoveContainer" containerID="4549df5b08bfb42d19c5af5c925cf6e5362baa1cb475142189b47b38299107e9" Jan 23 12:31:08 crc kubenswrapper[4865]: E0123 12:31:08.933609 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4549df5b08bfb42d19c5af5c925cf6e5362baa1cb475142189b47b38299107e9\": container with ID starting with 4549df5b08bfb42d19c5af5c925cf6e5362baa1cb475142189b47b38299107e9 not found: ID does not exist" containerID="4549df5b08bfb42d19c5af5c925cf6e5362baa1cb475142189b47b38299107e9" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.933627 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4549df5b08bfb42d19c5af5c925cf6e5362baa1cb475142189b47b38299107e9"} err="failed to get container status \"4549df5b08bfb42d19c5af5c925cf6e5362baa1cb475142189b47b38299107e9\": rpc error: code = NotFound desc = could not find container \"4549df5b08bfb42d19c5af5c925cf6e5362baa1cb475142189b47b38299107e9\": container with ID starting with 4549df5b08bfb42d19c5af5c925cf6e5362baa1cb475142189b47b38299107e9 not found: ID does not exist" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.933639 4865 scope.go:117] "RemoveContainer" containerID="cd6ea248acc6d9232d0ab22993ba896fe05d211b780ab60b6d80f67d8c76586c" Jan 23 12:31:08 crc kubenswrapper[4865]: E0123 12:31:08.933804 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd6ea248acc6d9232d0ab22993ba896fe05d211b780ab60b6d80f67d8c76586c\": container with ID starting with cd6ea248acc6d9232d0ab22993ba896fe05d211b780ab60b6d80f67d8c76586c not found: ID does not exist" containerID="cd6ea248acc6d9232d0ab22993ba896fe05d211b780ab60b6d80f67d8c76586c" Jan 23 12:31:08 crc kubenswrapper[4865]: I0123 12:31:08.933822 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6ea248acc6d9232d0ab22993ba896fe05d211b780ab60b6d80f67d8c76586c"} err="failed to get container status \"cd6ea248acc6d9232d0ab22993ba896fe05d211b780ab60b6d80f67d8c76586c\": rpc error: code = NotFound desc = could not find container \"cd6ea248acc6d9232d0ab22993ba896fe05d211b780ab60b6d80f67d8c76586c\": container with ID starting with cd6ea248acc6d9232d0ab22993ba896fe05d211b780ab60b6d80f67d8c76586c not found: ID does not exist" Jan 23 12:31:10 crc kubenswrapper[4865]: I0123 12:31:10.128493 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1242668-df22-4d55-a7de-651dd7de89fd" path="/var/lib/kubelet/pods/b1242668-df22-4d55-a7de-651dd7de89fd/volumes" Jan 23 12:31:48 crc kubenswrapper[4865]: I0123 12:31:48.776703 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:31:48 crc kubenswrapper[4865]: I0123 12:31:48.777154 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:32:06 crc kubenswrapper[4865]: I0123 12:32:06.360257 4865 generic.go:334] "Generic (PLEG): 
container finished" podID="84909230-47a6-4048-9d60-f80aa8a987aa" containerID="a2a030add419a4da82929cb9b31a811e7bca6d1cf1c236f686c2e9bba44304ce" exitCode=0 Jan 23 12:32:06 crc kubenswrapper[4865]: I0123 12:32:06.360335 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" event={"ID":"84909230-47a6-4048-9d60-f80aa8a987aa","Type":"ContainerDied","Data":"a2a030add419a4da82929cb9b31a811e7bca6d1cf1c236f686c2e9bba44304ce"} Jan 23 12:32:07 crc kubenswrapper[4865]: I0123 12:32:07.799344 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:32:07 crc kubenswrapper[4865]: I0123 12:32:07.947332 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-ssh-key-openstack-edpm-ipam\") pod \"84909230-47a6-4048-9d60-f80aa8a987aa\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " Jan 23 12:32:07 crc kubenswrapper[4865]: I0123 12:32:07.947513 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-neutron-ovn-metadata-agent-neutron-config-0\") pod \"84909230-47a6-4048-9d60-f80aa8a987aa\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " Jan 23 12:32:07 crc kubenswrapper[4865]: I0123 12:32:07.947622 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ds96r\" (UniqueName: \"kubernetes.io/projected/84909230-47a6-4048-9d60-f80aa8a987aa-kube-api-access-ds96r\") pod \"84909230-47a6-4048-9d60-f80aa8a987aa\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " Jan 23 12:32:07 crc kubenswrapper[4865]: I0123 12:32:07.947668 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-neutron-metadata-combined-ca-bundle\") pod \"84909230-47a6-4048-9d60-f80aa8a987aa\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " Jan 23 12:32:07 crc kubenswrapper[4865]: I0123 12:32:07.947758 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-inventory\") pod \"84909230-47a6-4048-9d60-f80aa8a987aa\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " Jan 23 12:32:07 crc kubenswrapper[4865]: I0123 12:32:07.947794 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-nova-metadata-neutron-config-0\") pod \"84909230-47a6-4048-9d60-f80aa8a987aa\" (UID: \"84909230-47a6-4048-9d60-f80aa8a987aa\") " Jan 23 12:32:07 crc kubenswrapper[4865]: I0123 12:32:07.960878 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "84909230-47a6-4048-9d60-f80aa8a987aa" (UID: "84909230-47a6-4048-9d60-f80aa8a987aa"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:32:07 crc kubenswrapper[4865]: I0123 12:32:07.967746 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84909230-47a6-4048-9d60-f80aa8a987aa-kube-api-access-ds96r" (OuterVolumeSpecName: "kube-api-access-ds96r") pod "84909230-47a6-4048-9d60-f80aa8a987aa" (UID: "84909230-47a6-4048-9d60-f80aa8a987aa"). InnerVolumeSpecName "kube-api-access-ds96r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:32:07 crc kubenswrapper[4865]: I0123 12:32:07.989182 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-inventory" (OuterVolumeSpecName: "inventory") pod "84909230-47a6-4048-9d60-f80aa8a987aa" (UID: "84909230-47a6-4048-9d60-f80aa8a987aa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:32:07 crc kubenswrapper[4865]: I0123 12:32:07.992219 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "84909230-47a6-4048-9d60-f80aa8a987aa" (UID: "84909230-47a6-4048-9d60-f80aa8a987aa"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:32:07 crc kubenswrapper[4865]: I0123 12:32:07.995623 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "84909230-47a6-4048-9d60-f80aa8a987aa" (UID: "84909230-47a6-4048-9d60-f80aa8a987aa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.020259 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "84909230-47a6-4048-9d60-f80aa8a987aa" (UID: "84909230-47a6-4048-9d60-f80aa8a987aa"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.050634 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.050678 4865 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.050695 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.050708 4865 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.050721 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ds96r\" (UniqueName: \"kubernetes.io/projected/84909230-47a6-4048-9d60-f80aa8a987aa-kube-api-access-ds96r\") on node \"crc\" DevicePath \"\"" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.050733 4865 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84909230-47a6-4048-9d60-f80aa8a987aa-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.380871 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" event={"ID":"84909230-47a6-4048-9d60-f80aa8a987aa","Type":"ContainerDied","Data":"c200d8d71e4d816f9d3eac55133eac3c831f1c2692571f0f970167422ebb16c8"} Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.381145 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c200d8d71e4d816f9d3eac55133eac3c831f1c2692571f0f970167422ebb16c8" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.380958 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-krwb4" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.463546 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx"] Jan 23 12:32:08 crc kubenswrapper[4865]: E0123 12:32:08.463918 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1242668-df22-4d55-a7de-651dd7de89fd" containerName="extract-content" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.463934 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1242668-df22-4d55-a7de-651dd7de89fd" containerName="extract-content" Jan 23 12:32:08 crc kubenswrapper[4865]: E0123 12:32:08.463954 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84909230-47a6-4048-9d60-f80aa8a987aa" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.463965 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="84909230-47a6-4048-9d60-f80aa8a987aa" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 12:32:08 crc kubenswrapper[4865]: E0123 12:32:08.463989 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1242668-df22-4d55-a7de-651dd7de89fd" containerName="extract-utilities" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.463995 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1242668-df22-4d55-a7de-651dd7de89fd" containerName="extract-utilities" Jan 23 12:32:08 crc kubenswrapper[4865]: E0123 12:32:08.464011 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1242668-df22-4d55-a7de-651dd7de89fd" containerName="registry-server" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.464017 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1242668-df22-4d55-a7de-651dd7de89fd" containerName="registry-server" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.464183 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="84909230-47a6-4048-9d60-f80aa8a987aa" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.464199 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1242668-df22-4d55-a7de-651dd7de89fd" containerName="registry-server" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.464804 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.467555 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.467570 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.467771 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.469578 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.469831 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.485931 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx"] Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.558020 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62k67\" (UniqueName: \"kubernetes.io/projected/85724ac5-933c-4c4f-896c-ee4db6add16d-kube-api-access-62k67\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.558133 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.558250 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.558305 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.558339 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.659943 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-62k67\" (UniqueName: \"kubernetes.io/projected/85724ac5-933c-4c4f-896c-ee4db6add16d-kube-api-access-62k67\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.660296 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.660511 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.661090 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.661260 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.664155 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.664512 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.664806 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.665555 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.678331 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62k67\" (UniqueName: \"kubernetes.io/projected/85724ac5-933c-4c4f-896c-ee4db6add16d-kube-api-access-62k67\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:08 crc kubenswrapper[4865]: I0123 12:32:08.778896 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:32:09 crc kubenswrapper[4865]: I0123 12:32:09.312696 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx"] Jan 23 12:32:09 crc kubenswrapper[4865]: I0123 12:32:09.389510 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" event={"ID":"85724ac5-933c-4c4f-896c-ee4db6add16d","Type":"ContainerStarted","Data":"dbb63a69fed1715219287965caaeebcf62b73a22064990bbf870e6a83a265d48"} Jan 23 12:32:10 crc kubenswrapper[4865]: I0123 12:32:10.416795 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" event={"ID":"85724ac5-933c-4c4f-896c-ee4db6add16d","Type":"ContainerStarted","Data":"4dcd5892df9ad6794bc0f737398a704cf2f0a265b51962d5f5b0bba54bbd7b8d"} Jan 23 12:32:18 crc kubenswrapper[4865]: I0123 12:32:18.776987 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:32:18 crc kubenswrapper[4865]: I0123 12:32:18.777646 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:32:48 crc kubenswrapper[4865]: I0123 12:32:48.776717 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:32:48 crc kubenswrapper[4865]: I0123 12:32:48.777353 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:32:48 crc kubenswrapper[4865]: I0123 12:32:48.777403 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 12:32:48 crc kubenswrapper[4865]: I0123 12:32:48.778202 4865 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 12:32:48 crc kubenswrapper[4865]: I0123 12:32:48.778266 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" gracePeriod=600 Jan 23 12:32:48 crc kubenswrapper[4865]: E0123 12:32:48.902934 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:32:49 crc kubenswrapper[4865]: I0123 12:32:49.739924 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" exitCode=0 Jan 23 12:32:49 crc kubenswrapper[4865]: I0123 12:32:49.739971 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06"} Jan 23 12:32:49 crc kubenswrapper[4865]: I0123 12:32:49.740034 4865 scope.go:117] "RemoveContainer" containerID="9e0926f65a291664cd50747401caf9b0e97c344dae0b4a2a81e701f0c1468f90" Jan 23 12:32:49 crc kubenswrapper[4865]: I0123 12:32:49.740721 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:32:49 crc kubenswrapper[4865]: E0123 12:32:49.741056 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:32:49 crc kubenswrapper[4865]: I0123 12:32:49.758830 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" podStartSLOduration=41.284371102 podStartE2EDuration="41.758804419s" podCreationTimestamp="2026-01-23 12:32:08 +0000 UTC" firstStartedPulling="2026-01-23 12:32:09.316063035 +0000 UTC m=+2373.485135271" lastFinishedPulling="2026-01-23 12:32:09.790496362 +0000 UTC m=+2373.959568588" observedRunningTime="2026-01-23 12:32:10.439074625 +0000 UTC m=+2374.608146861" watchObservedRunningTime="2026-01-23 12:32:49.758804419 +0000 UTC m=+2413.927876645" Jan 23 12:33:05 crc kubenswrapper[4865]: I0123 12:33:05.118306 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:33:05 crc 
kubenswrapper[4865]: E0123 12:33:05.119382 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:33:19 crc kubenswrapper[4865]: I0123 12:33:19.119738 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:33:19 crc kubenswrapper[4865]: E0123 12:33:19.120373 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:33:34 crc kubenswrapper[4865]: I0123 12:33:34.118090 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:33:34 crc kubenswrapper[4865]: E0123 12:33:34.118759 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:33:49 crc kubenswrapper[4865]: I0123 12:33:49.117911 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:33:49 crc kubenswrapper[4865]: E0123 12:33:49.119726 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:34:03 crc kubenswrapper[4865]: I0123 12:34:03.119919 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:34:03 crc kubenswrapper[4865]: E0123 12:34:03.120633 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:34:17 crc kubenswrapper[4865]: I0123 12:34:17.118770 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:34:17 crc kubenswrapper[4865]: E0123 12:34:17.119823 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:34:29 crc kubenswrapper[4865]: I0123 12:34:29.118772 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:34:29 crc kubenswrapper[4865]: E0123 12:34:29.119789 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:34:42 crc kubenswrapper[4865]: I0123 12:34:42.119003 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:34:42 crc kubenswrapper[4865]: E0123 12:34:42.120156 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:34:53 crc kubenswrapper[4865]: I0123 12:34:53.118946 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:34:53 crc kubenswrapper[4865]: E0123 12:34:53.119785 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:35:05 crc kubenswrapper[4865]: I0123 12:35:05.117517 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:35:05 crc kubenswrapper[4865]: E0123 12:35:05.118380 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:35:17 crc kubenswrapper[4865]: I0123 12:35:17.118635 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:35:17 crc kubenswrapper[4865]: E0123 12:35:17.119440 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:35:31 crc kubenswrapper[4865]: I0123 12:35:31.117780 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:35:31 crc kubenswrapper[4865]: E0123 12:35:31.118477 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:35:46 crc kubenswrapper[4865]: I0123 12:35:46.124374 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:35:46 crc kubenswrapper[4865]: E0123 12:35:46.125379 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:36:00 crc kubenswrapper[4865]: I0123 12:36:00.142327 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:36:00 crc kubenswrapper[4865]: E0123 12:36:00.143245 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:36:14 crc kubenswrapper[4865]: I0123 12:36:14.119047 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:36:14 crc kubenswrapper[4865]: E0123 12:36:14.120281 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:36:25 crc kubenswrapper[4865]: I0123 12:36:25.123964 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:36:25 crc kubenswrapper[4865]: E0123 12:36:25.128584 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:36:38 crc kubenswrapper[4865]: I0123 12:36:38.118409 4865 
scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:36:38 crc kubenswrapper[4865]: E0123 12:36:38.119155 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:36:47 crc kubenswrapper[4865]: I0123 12:36:47.893719 4865 generic.go:334] "Generic (PLEG): container finished" podID="85724ac5-933c-4c4f-896c-ee4db6add16d" containerID="4dcd5892df9ad6794bc0f737398a704cf2f0a265b51962d5f5b0bba54bbd7b8d" exitCode=0 Jan 23 12:36:47 crc kubenswrapper[4865]: I0123 12:36:47.893819 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" event={"ID":"85724ac5-933c-4c4f-896c-ee4db6add16d","Type":"ContainerDied","Data":"4dcd5892df9ad6794bc0f737398a704cf2f0a265b51962d5f5b0bba54bbd7b8d"} Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.326955 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.378810 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-libvirt-combined-ca-bundle\") pod \"85724ac5-933c-4c4f-896c-ee4db6add16d\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.378887 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-ssh-key-openstack-edpm-ipam\") pod \"85724ac5-933c-4c4f-896c-ee4db6add16d\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.378918 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-libvirt-secret-0\") pod \"85724ac5-933c-4c4f-896c-ee4db6add16d\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.379002 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-inventory\") pod \"85724ac5-933c-4c4f-896c-ee4db6add16d\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.379140 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62k67\" (UniqueName: \"kubernetes.io/projected/85724ac5-933c-4c4f-896c-ee4db6add16d-kube-api-access-62k67\") pod \"85724ac5-933c-4c4f-896c-ee4db6add16d\" (UID: \"85724ac5-933c-4c4f-896c-ee4db6add16d\") " Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.385823 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85724ac5-933c-4c4f-896c-ee4db6add16d-kube-api-access-62k67" (OuterVolumeSpecName: "kube-api-access-62k67") pod "85724ac5-933c-4c4f-896c-ee4db6add16d" (UID: 
"85724ac5-933c-4c4f-896c-ee4db6add16d"). InnerVolumeSpecName "kube-api-access-62k67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.386066 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "85724ac5-933c-4c4f-896c-ee4db6add16d" (UID: "85724ac5-933c-4c4f-896c-ee4db6add16d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.408131 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "85724ac5-933c-4c4f-896c-ee4db6add16d" (UID: "85724ac5-933c-4c4f-896c-ee4db6add16d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.415227 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "85724ac5-933c-4c4f-896c-ee4db6add16d" (UID: "85724ac5-933c-4c4f-896c-ee4db6add16d"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.417032 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-inventory" (OuterVolumeSpecName: "inventory") pod "85724ac5-933c-4c4f-896c-ee4db6add16d" (UID: "85724ac5-933c-4c4f-896c-ee4db6add16d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.481135 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.481173 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62k67\" (UniqueName: \"kubernetes.io/projected/85724ac5-933c-4c4f-896c-ee4db6add16d-kube-api-access-62k67\") on node \"crc\" DevicePath \"\"" Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.481191 4865 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.481202 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.481216 4865 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85724ac5-933c-4c4f-896c-ee4db6add16d-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.915476 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" event={"ID":"85724ac5-933c-4c4f-896c-ee4db6add16d","Type":"ContainerDied","Data":"dbb63a69fed1715219287965caaeebcf62b73a22064990bbf870e6a83a265d48"} Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.915522 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbb63a69fed1715219287965caaeebcf62b73a22064990bbf870e6a83a265d48" Jan 23 12:36:49 crc kubenswrapper[4865]: I0123 12:36:49.917334 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-mfwkx" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.028306 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4"] Jan 23 12:36:50 crc kubenswrapper[4865]: E0123 12:36:50.028699 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85724ac5-933c-4c4f-896c-ee4db6add16d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.028716 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="85724ac5-933c-4c4f-896c-ee4db6add16d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.028893 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="85724ac5-933c-4c4f-896c-ee4db6add16d" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.029480 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.034617 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.035550 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.035681 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.035751 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.035820 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.035937 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.036040 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.091468 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4"] Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.092590 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.092657 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.092720 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.092764 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.092819 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgdh5\" (UniqueName: 
\"kubernetes.io/projected/d4c5d63c-14cb-4276-8f31-7853fec43ace-kube-api-access-fgdh5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.092845 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.092874 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.092940 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.092968 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.194066 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.194130 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgdh5\" (UniqueName: \"kubernetes.io/projected/d4c5d63c-14cb-4276-8f31-7853fec43ace-kube-api-access-fgdh5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.194150 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.194182 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.194287 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.195194 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.195574 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.195621 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.195658 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.204409 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.212526 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.216144 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.217083 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.218045 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.220522 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.220966 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.225287 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.232953 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgdh5\" (UniqueName: \"kubernetes.io/projected/d4c5d63c-14cb-4276-8f31-7853fec43ace-kube-api-access-fgdh5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tzsl4\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.354673 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.934184 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4"] Jan 23 12:36:50 crc kubenswrapper[4865]: I0123 12:36:50.942253 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 12:36:51 crc kubenswrapper[4865]: I0123 12:36:51.931378 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" event={"ID":"d4c5d63c-14cb-4276-8f31-7853fec43ace","Type":"ContainerStarted","Data":"9a2576431795d34e9b63af2ccb2654ec2e92e4e62b882048f4708198f20639d0"} Jan 23 12:36:51 crc kubenswrapper[4865]: I0123 12:36:51.931732 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" event={"ID":"d4c5d63c-14cb-4276-8f31-7853fec43ace","Type":"ContainerStarted","Data":"10c19b98ef608927ddb9ff6ceac9c6d66f1046b749887dfe107ddd37201c531e"} Jan 23 12:36:51 crc kubenswrapper[4865]: I0123 12:36:51.952789 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" podStartSLOduration=1.460044479 podStartE2EDuration="1.952765263s" podCreationTimestamp="2026-01-23 12:36:50 +0000 UTC" firstStartedPulling="2026-01-23 12:36:50.941996638 +0000 UTC m=+2655.111068864" lastFinishedPulling="2026-01-23 12:36:51.434717422 +0000 UTC m=+2655.603789648" observedRunningTime="2026-01-23 12:36:51.945235839 +0000 UTC m=+2656.114308095" watchObservedRunningTime="2026-01-23 12:36:51.952765263 +0000 UTC m=+2656.121837489" Jan 23 12:36:52 crc kubenswrapper[4865]: I0123 12:36:52.118184 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:36:52 crc kubenswrapper[4865]: E0123 12:36:52.118427 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:37:04 crc kubenswrapper[4865]: I0123 12:37:04.118345 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:37:04 crc kubenswrapper[4865]: E0123 12:37:04.120947 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:37:17 crc kubenswrapper[4865]: I0123 12:37:17.118137 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:37:17 crc kubenswrapper[4865]: E0123 12:37:17.118715 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:37:28 crc kubenswrapper[4865]: I0123 12:37:28.118159 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:37:28 crc kubenswrapper[4865]: E0123 12:37:28.118876 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:37:41 crc kubenswrapper[4865]: I0123 12:37:41.119131 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:37:41 crc kubenswrapper[4865]: E0123 12:37:41.120093 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.089044 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xjzbh"] Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.091553 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.110281 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjzbh"] Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.240134 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4b5267-459e-4fa9-8957-ffe8f970fca8-utilities\") pod \"certified-operators-xjzbh\" (UID: \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\") " pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.240248 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbc7r\" (UniqueName: \"kubernetes.io/projected/1b4b5267-459e-4fa9-8957-ffe8f970fca8-kube-api-access-cbc7r\") pod \"certified-operators-xjzbh\" (UID: \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\") " pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.240269 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4b5267-459e-4fa9-8957-ffe8f970fca8-catalog-content\") pod \"certified-operators-xjzbh\" (UID: \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\") " pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.342966 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbc7r\" (UniqueName: \"kubernetes.io/projected/1b4b5267-459e-4fa9-8957-ffe8f970fca8-kube-api-access-cbc7r\") pod \"certified-operators-xjzbh\" (UID: \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\") " pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.343031 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4b5267-459e-4fa9-8957-ffe8f970fca8-catalog-content\") pod \"certified-operators-xjzbh\" (UID: \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\") " pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.343227 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4b5267-459e-4fa9-8957-ffe8f970fca8-utilities\") pod \"certified-operators-xjzbh\" (UID: \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\") " pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.344065 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4b5267-459e-4fa9-8957-ffe8f970fca8-catalog-content\") pod \"certified-operators-xjzbh\" (UID: \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\") " pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.344244 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4b5267-459e-4fa9-8957-ffe8f970fca8-utilities\") pod \"certified-operators-xjzbh\" (UID: \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\") " pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.367151 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cbc7r\" (UniqueName: \"kubernetes.io/projected/1b4b5267-459e-4fa9-8957-ffe8f970fca8-kube-api-access-cbc7r\") pod \"certified-operators-xjzbh\" (UID: \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\") " pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.422639 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:44 crc kubenswrapper[4865]: I0123 12:37:44.902365 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjzbh"] Jan 23 12:37:45 crc kubenswrapper[4865]: I0123 12:37:45.395478 4865 generic.go:334] "Generic (PLEG): container finished" podID="1b4b5267-459e-4fa9-8957-ffe8f970fca8" containerID="105ff4326c0354f4aa7eee2d4ede7b2fc08f13ad15c665db9e55d340e4f7e608" exitCode=0 Jan 23 12:37:45 crc kubenswrapper[4865]: I0123 12:37:45.395772 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzbh" event={"ID":"1b4b5267-459e-4fa9-8957-ffe8f970fca8","Type":"ContainerDied","Data":"105ff4326c0354f4aa7eee2d4ede7b2fc08f13ad15c665db9e55d340e4f7e608"} Jan 23 12:37:45 crc kubenswrapper[4865]: I0123 12:37:45.395803 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzbh" event={"ID":"1b4b5267-459e-4fa9-8957-ffe8f970fca8","Type":"ContainerStarted","Data":"60f5c680ddff4b8e482bbe57ff06cecfda55286d5a67d8809ba37372256d271d"} Jan 23 12:37:46 crc kubenswrapper[4865]: I0123 12:37:46.409011 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzbh" event={"ID":"1b4b5267-459e-4fa9-8957-ffe8f970fca8","Type":"ContainerStarted","Data":"6f5b73f91aeadc145248c6db603acc54e3924dd0af04e8c2760870332d110553"} Jan 23 12:37:47 crc kubenswrapper[4865]: I0123 12:37:47.418979 4865 generic.go:334] "Generic (PLEG): container finished" podID="1b4b5267-459e-4fa9-8957-ffe8f970fca8" containerID="6f5b73f91aeadc145248c6db603acc54e3924dd0af04e8c2760870332d110553" exitCode=0 Jan 23 12:37:47 crc kubenswrapper[4865]: I0123 12:37:47.419057 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzbh" event={"ID":"1b4b5267-459e-4fa9-8957-ffe8f970fca8","Type":"ContainerDied","Data":"6f5b73f91aeadc145248c6db603acc54e3924dd0af04e8c2760870332d110553"} Jan 23 12:37:48 crc kubenswrapper[4865]: I0123 12:37:48.428590 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzbh" event={"ID":"1b4b5267-459e-4fa9-8957-ffe8f970fca8","Type":"ContainerStarted","Data":"3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094"} Jan 23 12:37:54 crc kubenswrapper[4865]: I0123 12:37:54.423546 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:54 crc kubenswrapper[4865]: I0123 12:37:54.436091 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:54 crc kubenswrapper[4865]: I0123 12:37:54.491555 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:54 crc kubenswrapper[4865]: I0123 12:37:54.512475 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-xjzbh" podStartSLOduration=8.084603028 podStartE2EDuration="10.512456521s" podCreationTimestamp="2026-01-23 12:37:44 +0000 UTC" firstStartedPulling="2026-01-23 12:37:45.397804396 +0000 UTC m=+2709.566876622" lastFinishedPulling="2026-01-23 12:37:47.825657889 +0000 UTC m=+2711.994730115" observedRunningTime="2026-01-23 12:37:48.448168239 +0000 UTC m=+2712.617240475" watchObservedRunningTime="2026-01-23 12:37:54.512456521 +0000 UTC m=+2718.681528747" Jan 23 12:37:55 crc kubenswrapper[4865]: I0123 12:37:55.117972 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:37:55 crc kubenswrapper[4865]: I0123 12:37:55.491172 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"ea4df3191f9b072e2daa3f3b9785980c394ff89c94c943c5e65a746cd902c595"} Jan 23 12:37:55 crc kubenswrapper[4865]: I0123 12:37:55.572757 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:55 crc kubenswrapper[4865]: I0123 12:37:55.641662 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xjzbh"] Jan 23 12:37:57 crc kubenswrapper[4865]: I0123 12:37:57.507017 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xjzbh" podUID="1b4b5267-459e-4fa9-8957-ffe8f970fca8" containerName="registry-server" containerID="cri-o://3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094" gracePeriod=2 Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.037207 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.167722 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbc7r\" (UniqueName: \"kubernetes.io/projected/1b4b5267-459e-4fa9-8957-ffe8f970fca8-kube-api-access-cbc7r\") pod \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\" (UID: \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\") " Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.168292 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4b5267-459e-4fa9-8957-ffe8f970fca8-catalog-content\") pod \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\" (UID: \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\") " Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.168342 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4b5267-459e-4fa9-8957-ffe8f970fca8-utilities\") pod \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\" (UID: \"1b4b5267-459e-4fa9-8957-ffe8f970fca8\") " Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.169464 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b4b5267-459e-4fa9-8957-ffe8f970fca8-utilities" (OuterVolumeSpecName: "utilities") pod "1b4b5267-459e-4fa9-8957-ffe8f970fca8" (UID: "1b4b5267-459e-4fa9-8957-ffe8f970fca8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.170357 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b4b5267-459e-4fa9-8957-ffe8f970fca8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.183946 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b4b5267-459e-4fa9-8957-ffe8f970fca8-kube-api-access-cbc7r" (OuterVolumeSpecName: "kube-api-access-cbc7r") pod "1b4b5267-459e-4fa9-8957-ffe8f970fca8" (UID: "1b4b5267-459e-4fa9-8957-ffe8f970fca8"). InnerVolumeSpecName "kube-api-access-cbc7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.217194 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b4b5267-459e-4fa9-8957-ffe8f970fca8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b4b5267-459e-4fa9-8957-ffe8f970fca8" (UID: "1b4b5267-459e-4fa9-8957-ffe8f970fca8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.272447 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b4b5267-459e-4fa9-8957-ffe8f970fca8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.272483 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbc7r\" (UniqueName: \"kubernetes.io/projected/1b4b5267-459e-4fa9-8957-ffe8f970fca8-kube-api-access-cbc7r\") on node \"crc\" DevicePath \"\"" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.516239 4865 generic.go:334] "Generic (PLEG): container finished" podID="1b4b5267-459e-4fa9-8957-ffe8f970fca8" containerID="3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094" exitCode=0 Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.516278 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzbh" event={"ID":"1b4b5267-459e-4fa9-8957-ffe8f970fca8","Type":"ContainerDied","Data":"3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094"} Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.516298 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xjzbh" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.516317 4865 scope.go:117] "RemoveContainer" containerID="3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.516305 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjzbh" event={"ID":"1b4b5267-459e-4fa9-8957-ffe8f970fca8","Type":"ContainerDied","Data":"60f5c680ddff4b8e482bbe57ff06cecfda55286d5a67d8809ba37372256d271d"} Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.535694 4865 scope.go:117] "RemoveContainer" containerID="6f5b73f91aeadc145248c6db603acc54e3924dd0af04e8c2760870332d110553" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.562161 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xjzbh"] Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.564397 4865 scope.go:117] "RemoveContainer" containerID="105ff4326c0354f4aa7eee2d4ede7b2fc08f13ad15c665db9e55d340e4f7e608" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.570983 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xjzbh"] Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.609245 4865 scope.go:117] "RemoveContainer" containerID="3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094" Jan 23 12:37:58 crc kubenswrapper[4865]: E0123 12:37:58.609661 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094\": container with ID starting with 3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094 not found: ID does not exist" containerID="3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.609692 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094"} err="failed to get container status \"3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094\": rpc error: code = NotFound desc = could not find container \"3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094\": container with ID starting with 3ba560ac27fd356b5134591ff5e7537b64226c37d1a4704232dbddaf4629d094 not found: ID does not exist" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.609754 4865 scope.go:117] "RemoveContainer" containerID="6f5b73f91aeadc145248c6db603acc54e3924dd0af04e8c2760870332d110553" Jan 23 12:37:58 crc kubenswrapper[4865]: E0123 12:37:58.610013 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f5b73f91aeadc145248c6db603acc54e3924dd0af04e8c2760870332d110553\": container with ID starting with 6f5b73f91aeadc145248c6db603acc54e3924dd0af04e8c2760870332d110553 not found: ID does not exist" containerID="6f5b73f91aeadc145248c6db603acc54e3924dd0af04e8c2760870332d110553" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.610046 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f5b73f91aeadc145248c6db603acc54e3924dd0af04e8c2760870332d110553"} err="failed to get container status \"6f5b73f91aeadc145248c6db603acc54e3924dd0af04e8c2760870332d110553\": rpc error: code = NotFound desc = could not find 
container \"6f5b73f91aeadc145248c6db603acc54e3924dd0af04e8c2760870332d110553\": container with ID starting with 6f5b73f91aeadc145248c6db603acc54e3924dd0af04e8c2760870332d110553 not found: ID does not exist" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.610064 4865 scope.go:117] "RemoveContainer" containerID="105ff4326c0354f4aa7eee2d4ede7b2fc08f13ad15c665db9e55d340e4f7e608" Jan 23 12:37:58 crc kubenswrapper[4865]: E0123 12:37:58.610511 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"105ff4326c0354f4aa7eee2d4ede7b2fc08f13ad15c665db9e55d340e4f7e608\": container with ID starting with 105ff4326c0354f4aa7eee2d4ede7b2fc08f13ad15c665db9e55d340e4f7e608 not found: ID does not exist" containerID="105ff4326c0354f4aa7eee2d4ede7b2fc08f13ad15c665db9e55d340e4f7e608" Jan 23 12:37:58 crc kubenswrapper[4865]: I0123 12:37:58.610543 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"105ff4326c0354f4aa7eee2d4ede7b2fc08f13ad15c665db9e55d340e4f7e608"} err="failed to get container status \"105ff4326c0354f4aa7eee2d4ede7b2fc08f13ad15c665db9e55d340e4f7e608\": rpc error: code = NotFound desc = could not find container \"105ff4326c0354f4aa7eee2d4ede7b2fc08f13ad15c665db9e55d340e4f7e608\": container with ID starting with 105ff4326c0354f4aa7eee2d4ede7b2fc08f13ad15c665db9e55d340e4f7e608 not found: ID does not exist" Jan 23 12:38:00 crc kubenswrapper[4865]: I0123 12:38:00.128821 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b4b5267-459e-4fa9-8957-ffe8f970fca8" path="/var/lib/kubelet/pods/1b4b5267-459e-4fa9-8957-ffe8f970fca8/volumes" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.113352 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h2cj2"] Jan 23 12:38:11 crc kubenswrapper[4865]: E0123 12:38:11.114290 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4b5267-459e-4fa9-8957-ffe8f970fca8" containerName="registry-server" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.114303 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4b5267-459e-4fa9-8957-ffe8f970fca8" containerName="registry-server" Jan 23 12:38:11 crc kubenswrapper[4865]: E0123 12:38:11.114320 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4b5267-459e-4fa9-8957-ffe8f970fca8" containerName="extract-utilities" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.114326 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4b5267-459e-4fa9-8957-ffe8f970fca8" containerName="extract-utilities" Jan 23 12:38:11 crc kubenswrapper[4865]: E0123 12:38:11.114353 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b4b5267-459e-4fa9-8957-ffe8f970fca8" containerName="extract-content" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.114359 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b4b5267-459e-4fa9-8957-ffe8f970fca8" containerName="extract-content" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.114570 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b4b5267-459e-4fa9-8957-ffe8f970fca8" containerName="registry-server" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.115868 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.148181 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h2cj2"] Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.217338 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngk46\" (UniqueName: \"kubernetes.io/projected/82b9f60f-743b-4916-81b1-5cb45c1e01b7-kube-api-access-ngk46\") pod \"community-operators-h2cj2\" (UID: \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\") " pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.217382 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b9f60f-743b-4916-81b1-5cb45c1e01b7-utilities\") pod \"community-operators-h2cj2\" (UID: \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\") " pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.217668 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b9f60f-743b-4916-81b1-5cb45c1e01b7-catalog-content\") pod \"community-operators-h2cj2\" (UID: \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\") " pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.319005 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b9f60f-743b-4916-81b1-5cb45c1e01b7-catalog-content\") pod \"community-operators-h2cj2\" (UID: \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\") " pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.319082 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngk46\" (UniqueName: \"kubernetes.io/projected/82b9f60f-743b-4916-81b1-5cb45c1e01b7-kube-api-access-ngk46\") pod \"community-operators-h2cj2\" (UID: \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\") " pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.319102 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b9f60f-743b-4916-81b1-5cb45c1e01b7-utilities\") pod \"community-operators-h2cj2\" (UID: \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\") " pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.319519 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b9f60f-743b-4916-81b1-5cb45c1e01b7-catalog-content\") pod \"community-operators-h2cj2\" (UID: \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\") " pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.319540 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b9f60f-743b-4916-81b1-5cb45c1e01b7-utilities\") pod \"community-operators-h2cj2\" (UID: \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\") " pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.339021 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ngk46\" (UniqueName: \"kubernetes.io/projected/82b9f60f-743b-4916-81b1-5cb45c1e01b7-kube-api-access-ngk46\") pod \"community-operators-h2cj2\" (UID: \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\") " pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:11 crc kubenswrapper[4865]: I0123 12:38:11.485924 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:12 crc kubenswrapper[4865]: I0123 12:38:12.081300 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h2cj2"] Jan 23 12:38:12 crc kubenswrapper[4865]: I0123 12:38:12.630398 4865 generic.go:334] "Generic (PLEG): container finished" podID="82b9f60f-743b-4916-81b1-5cb45c1e01b7" containerID="071122b3374e172bb35691609679babb43efff59943da09b7de2e290ec4679ae" exitCode=0 Jan 23 12:38:12 crc kubenswrapper[4865]: I0123 12:38:12.630457 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2cj2" event={"ID":"82b9f60f-743b-4916-81b1-5cb45c1e01b7","Type":"ContainerDied","Data":"071122b3374e172bb35691609679babb43efff59943da09b7de2e290ec4679ae"} Jan 23 12:38:12 crc kubenswrapper[4865]: I0123 12:38:12.630767 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2cj2" event={"ID":"82b9f60f-743b-4916-81b1-5cb45c1e01b7","Type":"ContainerStarted","Data":"302792bf2720ce1d96656cc5da861d510b227813b2ced564788fb5858768473d"} Jan 23 12:38:13 crc kubenswrapper[4865]: I0123 12:38:13.640111 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2cj2" event={"ID":"82b9f60f-743b-4916-81b1-5cb45c1e01b7","Type":"ContainerStarted","Data":"b781950d06fabbd9c81df7b9f189756872e0fd7f81f11d7b255fe04f597e357f"} Jan 23 12:38:15 crc kubenswrapper[4865]: I0123 12:38:15.660192 4865 generic.go:334] "Generic (PLEG): container finished" podID="82b9f60f-743b-4916-81b1-5cb45c1e01b7" containerID="b781950d06fabbd9c81df7b9f189756872e0fd7f81f11d7b255fe04f597e357f" exitCode=0 Jan 23 12:38:15 crc kubenswrapper[4865]: I0123 12:38:15.660302 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2cj2" event={"ID":"82b9f60f-743b-4916-81b1-5cb45c1e01b7","Type":"ContainerDied","Data":"b781950d06fabbd9c81df7b9f189756872e0fd7f81f11d7b255fe04f597e357f"} Jan 23 12:38:17 crc kubenswrapper[4865]: I0123 12:38:17.678090 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2cj2" event={"ID":"82b9f60f-743b-4916-81b1-5cb45c1e01b7","Type":"ContainerStarted","Data":"bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2"} Jan 23 12:38:17 crc kubenswrapper[4865]: I0123 12:38:17.700762 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h2cj2" podStartSLOduration=2.611602132 podStartE2EDuration="6.700744902s" podCreationTimestamp="2026-01-23 12:38:11 +0000 UTC" firstStartedPulling="2026-01-23 12:38:12.633089988 +0000 UTC m=+2736.802162224" lastFinishedPulling="2026-01-23 12:38:16.722232768 +0000 UTC m=+2740.891304994" observedRunningTime="2026-01-23 12:38:17.693383663 +0000 UTC m=+2741.862455889" watchObservedRunningTime="2026-01-23 12:38:17.700744902 +0000 UTC m=+2741.869817128" Jan 23 12:38:21 crc kubenswrapper[4865]: I0123 12:38:21.486691 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:21 crc kubenswrapper[4865]: I0123 12:38:21.487055 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:21 crc kubenswrapper[4865]: I0123 12:38:21.538399 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:21 crc kubenswrapper[4865]: I0123 12:38:21.756505 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:21 crc kubenswrapper[4865]: I0123 12:38:21.812639 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h2cj2"] Jan 23 12:38:23 crc kubenswrapper[4865]: I0123 12:38:23.725764 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h2cj2" podUID="82b9f60f-743b-4916-81b1-5cb45c1e01b7" containerName="registry-server" containerID="cri-o://bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2" gracePeriod=2 Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.260947 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.380159 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b9f60f-743b-4916-81b1-5cb45c1e01b7-utilities\") pod \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\" (UID: \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\") " Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.380215 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngk46\" (UniqueName: \"kubernetes.io/projected/82b9f60f-743b-4916-81b1-5cb45c1e01b7-kube-api-access-ngk46\") pod \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\" (UID: \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\") " Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.380246 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b9f60f-743b-4916-81b1-5cb45c1e01b7-catalog-content\") pod \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\" (UID: \"82b9f60f-743b-4916-81b1-5cb45c1e01b7\") " Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.381047 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82b9f60f-743b-4916-81b1-5cb45c1e01b7-utilities" (OuterVolumeSpecName: "utilities") pod "82b9f60f-743b-4916-81b1-5cb45c1e01b7" (UID: "82b9f60f-743b-4916-81b1-5cb45c1e01b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.400053 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82b9f60f-743b-4916-81b1-5cb45c1e01b7-kube-api-access-ngk46" (OuterVolumeSpecName: "kube-api-access-ngk46") pod "82b9f60f-743b-4916-81b1-5cb45c1e01b7" (UID: "82b9f60f-743b-4916-81b1-5cb45c1e01b7"). InnerVolumeSpecName "kube-api-access-ngk46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.442144 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82b9f60f-743b-4916-81b1-5cb45c1e01b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82b9f60f-743b-4916-81b1-5cb45c1e01b7" (UID: "82b9f60f-743b-4916-81b1-5cb45c1e01b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.482296 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngk46\" (UniqueName: \"kubernetes.io/projected/82b9f60f-743b-4916-81b1-5cb45c1e01b7-kube-api-access-ngk46\") on node \"crc\" DevicePath \"\"" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.482329 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82b9f60f-743b-4916-81b1-5cb45c1e01b7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.482338 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82b9f60f-743b-4916-81b1-5cb45c1e01b7-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.735716 4865 generic.go:334] "Generic (PLEG): container finished" podID="82b9f60f-743b-4916-81b1-5cb45c1e01b7" containerID="bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2" exitCode=0 Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.735928 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2cj2" event={"ID":"82b9f60f-743b-4916-81b1-5cb45c1e01b7","Type":"ContainerDied","Data":"bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2"} Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.736949 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h2cj2" event={"ID":"82b9f60f-743b-4916-81b1-5cb45c1e01b7","Type":"ContainerDied","Data":"302792bf2720ce1d96656cc5da861d510b227813b2ced564788fb5858768473d"} Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.737047 4865 scope.go:117] "RemoveContainer" containerID="bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.735997 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h2cj2" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.775367 4865 scope.go:117] "RemoveContainer" containerID="b781950d06fabbd9c81df7b9f189756872e0fd7f81f11d7b255fe04f597e357f" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.776162 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h2cj2"] Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.785850 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h2cj2"] Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.801135 4865 scope.go:117] "RemoveContainer" containerID="071122b3374e172bb35691609679babb43efff59943da09b7de2e290ec4679ae" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.861732 4865 scope.go:117] "RemoveContainer" containerID="bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2" Jan 23 12:38:24 crc kubenswrapper[4865]: E0123 12:38:24.862351 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2\": container with ID starting with bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2 not found: ID does not exist" containerID="bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.862391 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2"} err="failed to get container status \"bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2\": rpc error: code = NotFound desc = could not find container \"bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2\": container with ID starting with bd2f6b30f6b2e3b735ad67d0e4d718007d55a9ea7a2514940876049ae7550ff2 not found: ID does not exist" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.862419 4865 scope.go:117] "RemoveContainer" containerID="b781950d06fabbd9c81df7b9f189756872e0fd7f81f11d7b255fe04f597e357f" Jan 23 12:38:24 crc kubenswrapper[4865]: E0123 12:38:24.865814 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b781950d06fabbd9c81df7b9f189756872e0fd7f81f11d7b255fe04f597e357f\": container with ID starting with b781950d06fabbd9c81df7b9f189756872e0fd7f81f11d7b255fe04f597e357f not found: ID does not exist" containerID="b781950d06fabbd9c81df7b9f189756872e0fd7f81f11d7b255fe04f597e357f" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.865852 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b781950d06fabbd9c81df7b9f189756872e0fd7f81f11d7b255fe04f597e357f"} err="failed to get container status \"b781950d06fabbd9c81df7b9f189756872e0fd7f81f11d7b255fe04f597e357f\": rpc error: code = NotFound desc = could not find container \"b781950d06fabbd9c81df7b9f189756872e0fd7f81f11d7b255fe04f597e357f\": container with ID starting with b781950d06fabbd9c81df7b9f189756872e0fd7f81f11d7b255fe04f597e357f not found: ID does not exist" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.865877 4865 scope.go:117] "RemoveContainer" containerID="071122b3374e172bb35691609679babb43efff59943da09b7de2e290ec4679ae" Jan 23 12:38:24 crc kubenswrapper[4865]: E0123 12:38:24.866362 4865 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"071122b3374e172bb35691609679babb43efff59943da09b7de2e290ec4679ae\": container with ID starting with 071122b3374e172bb35691609679babb43efff59943da09b7de2e290ec4679ae not found: ID does not exist" containerID="071122b3374e172bb35691609679babb43efff59943da09b7de2e290ec4679ae" Jan 23 12:38:24 crc kubenswrapper[4865]: I0123 12:38:24.866395 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"071122b3374e172bb35691609679babb43efff59943da09b7de2e290ec4679ae"} err="failed to get container status \"071122b3374e172bb35691609679babb43efff59943da09b7de2e290ec4679ae\": rpc error: code = NotFound desc = could not find container \"071122b3374e172bb35691609679babb43efff59943da09b7de2e290ec4679ae\": container with ID starting with 071122b3374e172bb35691609679babb43efff59943da09b7de2e290ec4679ae not found: ID does not exist" Jan 23 12:38:26 crc kubenswrapper[4865]: I0123 12:38:26.127371 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82b9f60f-743b-4916-81b1-5cb45c1e01b7" path="/var/lib/kubelet/pods/82b9f60f-743b-4916-81b1-5cb45c1e01b7/volumes" Jan 23 12:39:35 crc kubenswrapper[4865]: I0123 12:39:35.369685 4865 generic.go:334] "Generic (PLEG): container finished" podID="d4c5d63c-14cb-4276-8f31-7853fec43ace" containerID="9a2576431795d34e9b63af2ccb2654ec2e92e4e62b882048f4708198f20639d0" exitCode=0 Jan 23 12:39:35 crc kubenswrapper[4865]: I0123 12:39:35.369803 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" event={"ID":"d4c5d63c-14cb-4276-8f31-7853fec43ace","Type":"ContainerDied","Data":"9a2576431795d34e9b63af2ccb2654ec2e92e4e62b882048f4708198f20639d0"} Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.825167 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.916834 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-migration-ssh-key-1\") pod \"d4c5d63c-14cb-4276-8f31-7853fec43ace\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.916896 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-cell1-compute-config-0\") pod \"d4c5d63c-14cb-4276-8f31-7853fec43ace\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.916933 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-migration-ssh-key-0\") pod \"d4c5d63c-14cb-4276-8f31-7853fec43ace\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.916989 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-combined-ca-bundle\") pod \"d4c5d63c-14cb-4276-8f31-7853fec43ace\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.917024 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-cell1-compute-config-1\") pod \"d4c5d63c-14cb-4276-8f31-7853fec43ace\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.917099 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-inventory\") pod \"d4c5d63c-14cb-4276-8f31-7853fec43ace\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.917160 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-extra-config-0\") pod \"d4c5d63c-14cb-4276-8f31-7853fec43ace\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.917222 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-ssh-key-openstack-edpm-ipam\") pod \"d4c5d63c-14cb-4276-8f31-7853fec43ace\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.917276 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgdh5\" (UniqueName: \"kubernetes.io/projected/d4c5d63c-14cb-4276-8f31-7853fec43ace-kube-api-access-fgdh5\") pod \"d4c5d63c-14cb-4276-8f31-7853fec43ace\" (UID: \"d4c5d63c-14cb-4276-8f31-7853fec43ace\") " Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.925133 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d4c5d63c-14cb-4276-8f31-7853fec43ace" (UID: "d4c5d63c-14cb-4276-8f31-7853fec43ace"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.943530 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4c5d63c-14cb-4276-8f31-7853fec43ace-kube-api-access-fgdh5" (OuterVolumeSpecName: "kube-api-access-fgdh5") pod "d4c5d63c-14cb-4276-8f31-7853fec43ace" (UID: "d4c5d63c-14cb-4276-8f31-7853fec43ace"). InnerVolumeSpecName "kube-api-access-fgdh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.952321 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-inventory" (OuterVolumeSpecName: "inventory") pod "d4c5d63c-14cb-4276-8f31-7853fec43ace" (UID: "d4c5d63c-14cb-4276-8f31-7853fec43ace"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.954779 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "d4c5d63c-14cb-4276-8f31-7853fec43ace" (UID: "d4c5d63c-14cb-4276-8f31-7853fec43ace"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.960299 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "d4c5d63c-14cb-4276-8f31-7853fec43ace" (UID: "d4c5d63c-14cb-4276-8f31-7853fec43ace"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.960913 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "d4c5d63c-14cb-4276-8f31-7853fec43ace" (UID: "d4c5d63c-14cb-4276-8f31-7853fec43ace"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.961334 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "d4c5d63c-14cb-4276-8f31-7853fec43ace" (UID: "d4c5d63c-14cb-4276-8f31-7853fec43ace"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.971148 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d4c5d63c-14cb-4276-8f31-7853fec43ace" (UID: "d4c5d63c-14cb-4276-8f31-7853fec43ace"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:39:36 crc kubenswrapper[4865]: I0123 12:39:36.972821 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "d4c5d63c-14cb-4276-8f31-7853fec43ace" (UID: "d4c5d63c-14cb-4276-8f31-7853fec43ace"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.019491 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.019655 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgdh5\" (UniqueName: \"kubernetes.io/projected/d4c5d63c-14cb-4276-8f31-7853fec43ace-kube-api-access-fgdh5\") on node \"crc\" DevicePath \"\"" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.019716 4865 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.019791 4865 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.019859 4865 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.019947 4865 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.020008 4865 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.020068 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4c5d63c-14cb-4276-8f31-7853fec43ace-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.020124 4865 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d4c5d63c-14cb-4276-8f31-7853fec43ace-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.391097 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" event={"ID":"d4c5d63c-14cb-4276-8f31-7853fec43ace","Type":"ContainerDied","Data":"10c19b98ef608927ddb9ff6ceac9c6d66f1046b749887dfe107ddd37201c531e"} Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.391446 4865 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="10c19b98ef608927ddb9ff6ceac9c6d66f1046b749887dfe107ddd37201c531e" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.391155 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tzsl4" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.482564 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7"] Jan 23 12:39:37 crc kubenswrapper[4865]: E0123 12:39:37.483790 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b9f60f-743b-4916-81b1-5cb45c1e01b7" containerName="registry-server" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.483812 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b9f60f-743b-4916-81b1-5cb45c1e01b7" containerName="registry-server" Jan 23 12:39:37 crc kubenswrapper[4865]: E0123 12:39:37.483831 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4c5d63c-14cb-4276-8f31-7853fec43ace" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.483839 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4c5d63c-14cb-4276-8f31-7853fec43ace" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 12:39:37 crc kubenswrapper[4865]: E0123 12:39:37.483859 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b9f60f-743b-4916-81b1-5cb45c1e01b7" containerName="extract-utilities" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.483865 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b9f60f-743b-4916-81b1-5cb45c1e01b7" containerName="extract-utilities" Jan 23 12:39:37 crc kubenswrapper[4865]: E0123 12:39:37.483880 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82b9f60f-743b-4916-81b1-5cb45c1e01b7" containerName="extract-content" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.483886 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="82b9f60f-743b-4916-81b1-5cb45c1e01b7" containerName="extract-content" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.484048 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="82b9f60f-743b-4916-81b1-5cb45c1e01b7" containerName="registry-server" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.484073 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4c5d63c-14cb-4276-8f31-7853fec43ace" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.484649 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.487077 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-q5kgr" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.487177 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.487242 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.487077 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.487710 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.503513 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7"] Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.629509 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.629750 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.629793 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.629881 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.629921 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 
12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.630113 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.630142 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smkz5\" (UniqueName: \"kubernetes.io/projected/7f0fd562-a93d-4e58-8742-191dcc7dfeea-kube-api-access-smkz5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.731738 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.731803 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.731847 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.731873 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.731965 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.731989 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smkz5\" (UniqueName: \"kubernetes.io/projected/7f0fd562-a93d-4e58-8742-191dcc7dfeea-kube-api-access-smkz5\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.732034 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.736887 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.737464 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.742043 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.742886 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.743751 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.746087 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.756909 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smkz5\" (UniqueName: \"kubernetes.io/projected/7f0fd562-a93d-4e58-8742-191dcc7dfeea-kube-api-access-smkz5\") 
pod \"telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:37 crc kubenswrapper[4865]: I0123 12:39:37.800659 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:39:38 crc kubenswrapper[4865]: I0123 12:39:38.324126 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7"] Jan 23 12:39:38 crc kubenswrapper[4865]: I0123 12:39:38.400792 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" event={"ID":"7f0fd562-a93d-4e58-8742-191dcc7dfeea","Type":"ContainerStarted","Data":"6ffd357a1a0681048d2e019e645ea571246cb4167c59f153473a2b29595a6e2f"} Jan 23 12:39:39 crc kubenswrapper[4865]: I0123 12:39:39.414772 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" event={"ID":"7f0fd562-a93d-4e58-8742-191dcc7dfeea","Type":"ContainerStarted","Data":"97f85eac8cc9a7db0ecde66b02e7e17f18efc0ed61d02ae08310acbf2890b2e7"} Jan 23 12:39:39 crc kubenswrapper[4865]: I0123 12:39:39.442106 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" podStartSLOduration=2.026960359 podStartE2EDuration="2.442082773s" podCreationTimestamp="2026-01-23 12:39:37 +0000 UTC" firstStartedPulling="2026-01-23 12:39:38.320801408 +0000 UTC m=+2822.489873634" lastFinishedPulling="2026-01-23 12:39:38.735923822 +0000 UTC m=+2822.904996048" observedRunningTime="2026-01-23 12:39:39.43745864 +0000 UTC m=+2823.606530906" watchObservedRunningTime="2026-01-23 12:39:39.442082773 +0000 UTC m=+2823.611155039" Jan 23 12:40:18 crc kubenswrapper[4865]: I0123 12:40:18.775921 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:40:18 crc kubenswrapper[4865]: I0123 12:40:18.776466 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:40:39 crc kubenswrapper[4865]: I0123 12:40:39.978886 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v6mkn"] Jan 23 12:40:39 crc kubenswrapper[4865]: I0123 12:40:39.981939 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:39 crc kubenswrapper[4865]: I0123 12:40:39.988660 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6mkn"] Jan 23 12:40:40 crc kubenswrapper[4865]: I0123 12:40:40.066370 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqtck\" (UniqueName: \"kubernetes.io/projected/302fc538-aa01-4724-aa9c-3d9511142231-kube-api-access-xqtck\") pod \"redhat-operators-v6mkn\" (UID: \"302fc538-aa01-4724-aa9c-3d9511142231\") " pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:40 crc kubenswrapper[4865]: I0123 12:40:40.066478 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/302fc538-aa01-4724-aa9c-3d9511142231-utilities\") pod \"redhat-operators-v6mkn\" (UID: \"302fc538-aa01-4724-aa9c-3d9511142231\") " pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:40 crc kubenswrapper[4865]: I0123 12:40:40.066532 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/302fc538-aa01-4724-aa9c-3d9511142231-catalog-content\") pod \"redhat-operators-v6mkn\" (UID: \"302fc538-aa01-4724-aa9c-3d9511142231\") " pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:40 crc kubenswrapper[4865]: I0123 12:40:40.168476 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/302fc538-aa01-4724-aa9c-3d9511142231-utilities\") pod \"redhat-operators-v6mkn\" (UID: \"302fc538-aa01-4724-aa9c-3d9511142231\") " pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:40 crc kubenswrapper[4865]: I0123 12:40:40.169031 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/302fc538-aa01-4724-aa9c-3d9511142231-utilities\") pod \"redhat-operators-v6mkn\" (UID: \"302fc538-aa01-4724-aa9c-3d9511142231\") " pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:40 crc kubenswrapper[4865]: I0123 12:40:40.169196 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/302fc538-aa01-4724-aa9c-3d9511142231-catalog-content\") pod \"redhat-operators-v6mkn\" (UID: \"302fc538-aa01-4724-aa9c-3d9511142231\") " pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:40 crc kubenswrapper[4865]: I0123 12:40:40.169523 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/302fc538-aa01-4724-aa9c-3d9511142231-catalog-content\") pod \"redhat-operators-v6mkn\" (UID: \"302fc538-aa01-4724-aa9c-3d9511142231\") " pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:40 crc kubenswrapper[4865]: I0123 12:40:40.169751 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqtck\" (UniqueName: \"kubernetes.io/projected/302fc538-aa01-4724-aa9c-3d9511142231-kube-api-access-xqtck\") pod \"redhat-operators-v6mkn\" (UID: \"302fc538-aa01-4724-aa9c-3d9511142231\") " pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:40 crc kubenswrapper[4865]: I0123 12:40:40.197377 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xqtck\" (UniqueName: \"kubernetes.io/projected/302fc538-aa01-4724-aa9c-3d9511142231-kube-api-access-xqtck\") pod \"redhat-operators-v6mkn\" (UID: \"302fc538-aa01-4724-aa9c-3d9511142231\") " pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:40 crc kubenswrapper[4865]: I0123 12:40:40.307234 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:40 crc kubenswrapper[4865]: I0123 12:40:40.812746 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6mkn"] Jan 23 12:40:40 crc kubenswrapper[4865]: I0123 12:40:40.968106 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6mkn" event={"ID":"302fc538-aa01-4724-aa9c-3d9511142231","Type":"ContainerStarted","Data":"2009fc5236f479703fdecb362f629a909fccf3f16a142de564970249aedd4138"} Jan 23 12:40:41 crc kubenswrapper[4865]: I0123 12:40:41.977652 4865 generic.go:334] "Generic (PLEG): container finished" podID="302fc538-aa01-4724-aa9c-3d9511142231" containerID="e7240e9a7bcb7b726f2bd793dbf7e6e5620385f3a903972fffe6a157689f1d99" exitCode=0 Jan 23 12:40:41 crc kubenswrapper[4865]: I0123 12:40:41.977696 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6mkn" event={"ID":"302fc538-aa01-4724-aa9c-3d9511142231","Type":"ContainerDied","Data":"e7240e9a7bcb7b726f2bd793dbf7e6e5620385f3a903972fffe6a157689f1d99"} Jan 23 12:40:44 crc kubenswrapper[4865]: I0123 12:40:44.003198 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6mkn" event={"ID":"302fc538-aa01-4724-aa9c-3d9511142231","Type":"ContainerStarted","Data":"e2f75ffdb1d766c74a3a60b8d419f0fd31acaf44f65df94df3f83b9b265de65d"} Jan 23 12:40:47 crc kubenswrapper[4865]: I0123 12:40:47.031687 4865 generic.go:334] "Generic (PLEG): container finished" podID="302fc538-aa01-4724-aa9c-3d9511142231" containerID="e2f75ffdb1d766c74a3a60b8d419f0fd31acaf44f65df94df3f83b9b265de65d" exitCode=0 Jan 23 12:40:47 crc kubenswrapper[4865]: I0123 12:40:47.031720 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6mkn" event={"ID":"302fc538-aa01-4724-aa9c-3d9511142231","Type":"ContainerDied","Data":"e2f75ffdb1d766c74a3a60b8d419f0fd31acaf44f65df94df3f83b9b265de65d"} Jan 23 12:40:48 crc kubenswrapper[4865]: I0123 12:40:48.041977 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6mkn" event={"ID":"302fc538-aa01-4724-aa9c-3d9511142231","Type":"ContainerStarted","Data":"5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646"} Jan 23 12:40:48 crc kubenswrapper[4865]: I0123 12:40:48.066757 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v6mkn" podStartSLOduration=3.496362012 podStartE2EDuration="9.06671805s" podCreationTimestamp="2026-01-23 12:40:39 +0000 UTC" firstStartedPulling="2026-01-23 12:40:41.979391532 +0000 UTC m=+2886.148463748" lastFinishedPulling="2026-01-23 12:40:47.54974756 +0000 UTC m=+2891.718819786" observedRunningTime="2026-01-23 12:40:48.058941851 +0000 UTC m=+2892.228014077" watchObservedRunningTime="2026-01-23 12:40:48.06671805 +0000 UTC m=+2892.235790276" Jan 23 12:40:48 crc kubenswrapper[4865]: I0123 12:40:48.776144 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:40:48 crc kubenswrapper[4865]: I0123 12:40:48.776232 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:40:50 crc kubenswrapper[4865]: I0123 12:40:50.307485 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:50 crc kubenswrapper[4865]: I0123 12:40:50.307874 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:40:51 crc kubenswrapper[4865]: I0123 12:40:51.385544 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6mkn" podUID="302fc538-aa01-4724-aa9c-3d9511142231" containerName="registry-server" probeResult="failure" output=< Jan 23 12:40:51 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:40:51 crc kubenswrapper[4865]: > Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.059095 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z9lg9"] Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.061518 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.076737 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9lg9"] Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.246792 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b23ae1-254d-40ae-9254-1beda3230476-utilities\") pod \"redhat-marketplace-z9lg9\" (UID: \"b6b23ae1-254d-40ae-9254-1beda3230476\") " pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.247154 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gsch\" (UniqueName: \"kubernetes.io/projected/b6b23ae1-254d-40ae-9254-1beda3230476-kube-api-access-6gsch\") pod \"redhat-marketplace-z9lg9\" (UID: \"b6b23ae1-254d-40ae-9254-1beda3230476\") " pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.247263 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b23ae1-254d-40ae-9254-1beda3230476-catalog-content\") pod \"redhat-marketplace-z9lg9\" (UID: \"b6b23ae1-254d-40ae-9254-1beda3230476\") " pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.349064 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b23ae1-254d-40ae-9254-1beda3230476-utilities\") pod \"redhat-marketplace-z9lg9\" (UID: \"b6b23ae1-254d-40ae-9254-1beda3230476\") " pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:40:58 crc 
kubenswrapper[4865]: I0123 12:40:58.349150 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gsch\" (UniqueName: \"kubernetes.io/projected/b6b23ae1-254d-40ae-9254-1beda3230476-kube-api-access-6gsch\") pod \"redhat-marketplace-z9lg9\" (UID: \"b6b23ae1-254d-40ae-9254-1beda3230476\") " pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.349229 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b23ae1-254d-40ae-9254-1beda3230476-catalog-content\") pod \"redhat-marketplace-z9lg9\" (UID: \"b6b23ae1-254d-40ae-9254-1beda3230476\") " pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.349719 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b23ae1-254d-40ae-9254-1beda3230476-catalog-content\") pod \"redhat-marketplace-z9lg9\" (UID: \"b6b23ae1-254d-40ae-9254-1beda3230476\") " pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.349722 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b23ae1-254d-40ae-9254-1beda3230476-utilities\") pod \"redhat-marketplace-z9lg9\" (UID: \"b6b23ae1-254d-40ae-9254-1beda3230476\") " pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.375878 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gsch\" (UniqueName: \"kubernetes.io/projected/b6b23ae1-254d-40ae-9254-1beda3230476-kube-api-access-6gsch\") pod \"redhat-marketplace-z9lg9\" (UID: \"b6b23ae1-254d-40ae-9254-1beda3230476\") " pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.386125 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:40:58 crc kubenswrapper[4865]: I0123 12:40:58.940000 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9lg9"] Jan 23 12:40:59 crc kubenswrapper[4865]: I0123 12:40:59.141466 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9lg9" event={"ID":"b6b23ae1-254d-40ae-9254-1beda3230476","Type":"ContainerStarted","Data":"9fa191fb16242be75ee92e1f56e349b95c595665e89e5057d327704eb1d62392"} Jan 23 12:41:00 crc kubenswrapper[4865]: I0123 12:41:00.212742 4865 generic.go:334] "Generic (PLEG): container finished" podID="b6b23ae1-254d-40ae-9254-1beda3230476" containerID="a6c8de5f657cad7ad0f870dc4bcd6b06f24b6d333a34a60536bb6e1312ce4cd2" exitCode=0 Jan 23 12:41:00 crc kubenswrapper[4865]: I0123 12:41:00.213011 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9lg9" event={"ID":"b6b23ae1-254d-40ae-9254-1beda3230476","Type":"ContainerDied","Data":"a6c8de5f657cad7ad0f870dc4bcd6b06f24b6d333a34a60536bb6e1312ce4cd2"} Jan 23 12:41:00 crc kubenswrapper[4865]: I0123 12:41:00.402095 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:41:00 crc kubenswrapper[4865]: I0123 12:41:00.500817 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:41:02 crc kubenswrapper[4865]: I0123 12:41:02.238250 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9lg9" event={"ID":"b6b23ae1-254d-40ae-9254-1beda3230476","Type":"ContainerStarted","Data":"7ad826e045988d00804e42b5e5daa8e4d1f140f0c2bc38f7e5511ea62f31d6ad"} Jan 23 12:41:02 crc kubenswrapper[4865]: I0123 12:41:02.439425 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6mkn"] Jan 23 12:41:02 crc kubenswrapper[4865]: I0123 12:41:02.444057 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v6mkn" podUID="302fc538-aa01-4724-aa9c-3d9511142231" containerName="registry-server" containerID="cri-o://5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646" gracePeriod=2 Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.140098 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.248264 4865 generic.go:334] "Generic (PLEG): container finished" podID="b6b23ae1-254d-40ae-9254-1beda3230476" containerID="7ad826e045988d00804e42b5e5daa8e4d1f140f0c2bc38f7e5511ea62f31d6ad" exitCode=0 Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.248409 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9lg9" event={"ID":"b6b23ae1-254d-40ae-9254-1beda3230476","Type":"ContainerDied","Data":"7ad826e045988d00804e42b5e5daa8e4d1f140f0c2bc38f7e5511ea62f31d6ad"} Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.252216 4865 generic.go:334] "Generic (PLEG): container finished" podID="302fc538-aa01-4724-aa9c-3d9511142231" containerID="5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646" exitCode=0 Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.252255 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6mkn" event={"ID":"302fc538-aa01-4724-aa9c-3d9511142231","Type":"ContainerDied","Data":"5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646"} Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.252273 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6mkn" event={"ID":"302fc538-aa01-4724-aa9c-3d9511142231","Type":"ContainerDied","Data":"2009fc5236f479703fdecb362f629a909fccf3f16a142de564970249aedd4138"} Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.252289 4865 scope.go:117] "RemoveContainer" containerID="5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.252313 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6mkn" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.261023 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/302fc538-aa01-4724-aa9c-3d9511142231-catalog-content\") pod \"302fc538-aa01-4724-aa9c-3d9511142231\" (UID: \"302fc538-aa01-4724-aa9c-3d9511142231\") " Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.261208 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/302fc538-aa01-4724-aa9c-3d9511142231-utilities\") pod \"302fc538-aa01-4724-aa9c-3d9511142231\" (UID: \"302fc538-aa01-4724-aa9c-3d9511142231\") " Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.261470 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqtck\" (UniqueName: \"kubernetes.io/projected/302fc538-aa01-4724-aa9c-3d9511142231-kube-api-access-xqtck\") pod \"302fc538-aa01-4724-aa9c-3d9511142231\" (UID: \"302fc538-aa01-4724-aa9c-3d9511142231\") " Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.264313 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/302fc538-aa01-4724-aa9c-3d9511142231-utilities" (OuterVolumeSpecName: "utilities") pod "302fc538-aa01-4724-aa9c-3d9511142231" (UID: "302fc538-aa01-4724-aa9c-3d9511142231"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.283774 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302fc538-aa01-4724-aa9c-3d9511142231-kube-api-access-xqtck" (OuterVolumeSpecName: "kube-api-access-xqtck") pod "302fc538-aa01-4724-aa9c-3d9511142231" (UID: "302fc538-aa01-4724-aa9c-3d9511142231"). InnerVolumeSpecName "kube-api-access-xqtck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.290090 4865 scope.go:117] "RemoveContainer" containerID="e2f75ffdb1d766c74a3a60b8d419f0fd31acaf44f65df94df3f83b9b265de65d" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.327579 4865 scope.go:117] "RemoveContainer" containerID="e7240e9a7bcb7b726f2bd793dbf7e6e5620385f3a903972fffe6a157689f1d99" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.364582 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqtck\" (UniqueName: \"kubernetes.io/projected/302fc538-aa01-4724-aa9c-3d9511142231-kube-api-access-xqtck\") on node \"crc\" DevicePath \"\"" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.364624 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/302fc538-aa01-4724-aa9c-3d9511142231-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.374282 4865 scope.go:117] "RemoveContainer" containerID="5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646" Jan 23 12:41:03 crc kubenswrapper[4865]: E0123 12:41:03.374847 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646\": container with ID starting with 5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646 not found: ID does not exist" containerID="5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.374885 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646"} err="failed to get container status \"5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646\": rpc error: code = NotFound desc = could not find container \"5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646\": container with ID starting with 5d35f90d215e411cef6066f9d97d565a799a20e68b874507d4ebdcef5b4de646 not found: ID does not exist" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.374915 4865 scope.go:117] "RemoveContainer" containerID="e2f75ffdb1d766c74a3a60b8d419f0fd31acaf44f65df94df3f83b9b265de65d" Jan 23 12:41:03 crc kubenswrapper[4865]: E0123 12:41:03.375277 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2f75ffdb1d766c74a3a60b8d419f0fd31acaf44f65df94df3f83b9b265de65d\": container with ID starting with e2f75ffdb1d766c74a3a60b8d419f0fd31acaf44f65df94df3f83b9b265de65d not found: ID does not exist" containerID="e2f75ffdb1d766c74a3a60b8d419f0fd31acaf44f65df94df3f83b9b265de65d" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.375324 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2f75ffdb1d766c74a3a60b8d419f0fd31acaf44f65df94df3f83b9b265de65d"} err="failed to get 
container status \"e2f75ffdb1d766c74a3a60b8d419f0fd31acaf44f65df94df3f83b9b265de65d\": rpc error: code = NotFound desc = could not find container \"e2f75ffdb1d766c74a3a60b8d419f0fd31acaf44f65df94df3f83b9b265de65d\": container with ID starting with e2f75ffdb1d766c74a3a60b8d419f0fd31acaf44f65df94df3f83b9b265de65d not found: ID does not exist" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.375375 4865 scope.go:117] "RemoveContainer" containerID="e7240e9a7bcb7b726f2bd793dbf7e6e5620385f3a903972fffe6a157689f1d99" Jan 23 12:41:03 crc kubenswrapper[4865]: E0123 12:41:03.375822 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7240e9a7bcb7b726f2bd793dbf7e6e5620385f3a903972fffe6a157689f1d99\": container with ID starting with e7240e9a7bcb7b726f2bd793dbf7e6e5620385f3a903972fffe6a157689f1d99 not found: ID does not exist" containerID="e7240e9a7bcb7b726f2bd793dbf7e6e5620385f3a903972fffe6a157689f1d99" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.375848 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7240e9a7bcb7b726f2bd793dbf7e6e5620385f3a903972fffe6a157689f1d99"} err="failed to get container status \"e7240e9a7bcb7b726f2bd793dbf7e6e5620385f3a903972fffe6a157689f1d99\": rpc error: code = NotFound desc = could not find container \"e7240e9a7bcb7b726f2bd793dbf7e6e5620385f3a903972fffe6a157689f1d99\": container with ID starting with e7240e9a7bcb7b726f2bd793dbf7e6e5620385f3a903972fffe6a157689f1d99 not found: ID does not exist" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.436335 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/302fc538-aa01-4724-aa9c-3d9511142231-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "302fc538-aa01-4724-aa9c-3d9511142231" (UID: "302fc538-aa01-4724-aa9c-3d9511142231"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.465923 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/302fc538-aa01-4724-aa9c-3d9511142231-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.601068 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6mkn"] Jan 23 12:41:03 crc kubenswrapper[4865]: I0123 12:41:03.609342 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v6mkn"] Jan 23 12:41:04 crc kubenswrapper[4865]: I0123 12:41:04.127538 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="302fc538-aa01-4724-aa9c-3d9511142231" path="/var/lib/kubelet/pods/302fc538-aa01-4724-aa9c-3d9511142231/volumes" Jan 23 12:41:04 crc kubenswrapper[4865]: I0123 12:41:04.261298 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9lg9" event={"ID":"b6b23ae1-254d-40ae-9254-1beda3230476","Type":"ContainerStarted","Data":"d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a"} Jan 23 12:41:08 crc kubenswrapper[4865]: I0123 12:41:08.387397 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:41:08 crc kubenswrapper[4865]: I0123 12:41:08.388055 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:41:08 crc kubenswrapper[4865]: I0123 12:41:08.465897 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:41:08 crc kubenswrapper[4865]: I0123 12:41:08.485353 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z9lg9" podStartSLOduration=6.940078071 podStartE2EDuration="10.485337767s" podCreationTimestamp="2026-01-23 12:40:58 +0000 UTC" firstStartedPulling="2026-01-23 12:41:00.225905562 +0000 UTC m=+2904.394977788" lastFinishedPulling="2026-01-23 12:41:03.771165258 +0000 UTC m=+2907.940237484" observedRunningTime="2026-01-23 12:41:04.327123196 +0000 UTC m=+2908.496195422" watchObservedRunningTime="2026-01-23 12:41:08.485337767 +0000 UTC m=+2912.654410003" Jan 23 12:41:09 crc kubenswrapper[4865]: I0123 12:41:09.395851 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:41:09 crc kubenswrapper[4865]: I0123 12:41:09.631705 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9lg9"] Jan 23 12:41:11 crc kubenswrapper[4865]: I0123 12:41:11.320638 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z9lg9" podUID="b6b23ae1-254d-40ae-9254-1beda3230476" containerName="registry-server" containerID="cri-o://d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a" gracePeriod=2 Jan 23 12:41:11 crc kubenswrapper[4865]: I0123 12:41:11.808482 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:41:11 crc kubenswrapper[4865]: I0123 12:41:11.912165 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gsch\" (UniqueName: \"kubernetes.io/projected/b6b23ae1-254d-40ae-9254-1beda3230476-kube-api-access-6gsch\") pod \"b6b23ae1-254d-40ae-9254-1beda3230476\" (UID: \"b6b23ae1-254d-40ae-9254-1beda3230476\") " Jan 23 12:41:11 crc kubenswrapper[4865]: I0123 12:41:11.912266 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b23ae1-254d-40ae-9254-1beda3230476-utilities\") pod \"b6b23ae1-254d-40ae-9254-1beda3230476\" (UID: \"b6b23ae1-254d-40ae-9254-1beda3230476\") " Jan 23 12:41:11 crc kubenswrapper[4865]: I0123 12:41:11.912383 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b23ae1-254d-40ae-9254-1beda3230476-catalog-content\") pod \"b6b23ae1-254d-40ae-9254-1beda3230476\" (UID: \"b6b23ae1-254d-40ae-9254-1beda3230476\") " Jan 23 12:41:11 crc kubenswrapper[4865]: I0123 12:41:11.913791 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b23ae1-254d-40ae-9254-1beda3230476-utilities" (OuterVolumeSpecName: "utilities") pod "b6b23ae1-254d-40ae-9254-1beda3230476" (UID: "b6b23ae1-254d-40ae-9254-1beda3230476"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:41:11 crc kubenswrapper[4865]: I0123 12:41:11.925295 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6b23ae1-254d-40ae-9254-1beda3230476-kube-api-access-6gsch" (OuterVolumeSpecName: "kube-api-access-6gsch") pod "b6b23ae1-254d-40ae-9254-1beda3230476" (UID: "b6b23ae1-254d-40ae-9254-1beda3230476"). InnerVolumeSpecName "kube-api-access-6gsch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:41:11 crc kubenswrapper[4865]: I0123 12:41:11.941001 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6b23ae1-254d-40ae-9254-1beda3230476-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6b23ae1-254d-40ae-9254-1beda3230476" (UID: "b6b23ae1-254d-40ae-9254-1beda3230476"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.015356 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gsch\" (UniqueName: \"kubernetes.io/projected/b6b23ae1-254d-40ae-9254-1beda3230476-kube-api-access-6gsch\") on node \"crc\" DevicePath \"\"" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.015401 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6b23ae1-254d-40ae-9254-1beda3230476-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.015421 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6b23ae1-254d-40ae-9254-1beda3230476-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.331679 4865 generic.go:334] "Generic (PLEG): container finished" podID="b6b23ae1-254d-40ae-9254-1beda3230476" containerID="d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a" exitCode=0 Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.331719 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9lg9" event={"ID":"b6b23ae1-254d-40ae-9254-1beda3230476","Type":"ContainerDied","Data":"d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a"} Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.331732 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z9lg9" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.331746 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z9lg9" event={"ID":"b6b23ae1-254d-40ae-9254-1beda3230476","Type":"ContainerDied","Data":"9fa191fb16242be75ee92e1f56e349b95c595665e89e5057d327704eb1d62392"} Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.331763 4865 scope.go:117] "RemoveContainer" containerID="d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.356162 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9lg9"] Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.360110 4865 scope.go:117] "RemoveContainer" containerID="7ad826e045988d00804e42b5e5daa8e4d1f140f0c2bc38f7e5511ea62f31d6ad" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.364822 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z9lg9"] Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.376642 4865 scope.go:117] "RemoveContainer" containerID="a6c8de5f657cad7ad0f870dc4bcd6b06f24b6d333a34a60536bb6e1312ce4cd2" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.435884 4865 scope.go:117] "RemoveContainer" containerID="d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a" Jan 23 12:41:12 crc kubenswrapper[4865]: E0123 12:41:12.436591 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a\": container with ID starting with d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a not found: ID does not exist" containerID="d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.436714 4865 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a"} err="failed to get container status \"d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a\": rpc error: code = NotFound desc = could not find container \"d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a\": container with ID starting with d45633505682a4722dae7259c3ad4623a1147a56613c76bff2e6e205e23abb5a not found: ID does not exist" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.436740 4865 scope.go:117] "RemoveContainer" containerID="7ad826e045988d00804e42b5e5daa8e4d1f140f0c2bc38f7e5511ea62f31d6ad" Jan 23 12:41:12 crc kubenswrapper[4865]: E0123 12:41:12.437041 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad826e045988d00804e42b5e5daa8e4d1f140f0c2bc38f7e5511ea62f31d6ad\": container with ID starting with 7ad826e045988d00804e42b5e5daa8e4d1f140f0c2bc38f7e5511ea62f31d6ad not found: ID does not exist" containerID="7ad826e045988d00804e42b5e5daa8e4d1f140f0c2bc38f7e5511ea62f31d6ad" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.437082 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad826e045988d00804e42b5e5daa8e4d1f140f0c2bc38f7e5511ea62f31d6ad"} err="failed to get container status \"7ad826e045988d00804e42b5e5daa8e4d1f140f0c2bc38f7e5511ea62f31d6ad\": rpc error: code = NotFound desc = could not find container \"7ad826e045988d00804e42b5e5daa8e4d1f140f0c2bc38f7e5511ea62f31d6ad\": container with ID starting with 7ad826e045988d00804e42b5e5daa8e4d1f140f0c2bc38f7e5511ea62f31d6ad not found: ID does not exist" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.437111 4865 scope.go:117] "RemoveContainer" containerID="a6c8de5f657cad7ad0f870dc4bcd6b06f24b6d333a34a60536bb6e1312ce4cd2" Jan 23 12:41:12 crc kubenswrapper[4865]: E0123 12:41:12.438154 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6c8de5f657cad7ad0f870dc4bcd6b06f24b6d333a34a60536bb6e1312ce4cd2\": container with ID starting with a6c8de5f657cad7ad0f870dc4bcd6b06f24b6d333a34a60536bb6e1312ce4cd2 not found: ID does not exist" containerID="a6c8de5f657cad7ad0f870dc4bcd6b06f24b6d333a34a60536bb6e1312ce4cd2" Jan 23 12:41:12 crc kubenswrapper[4865]: I0123 12:41:12.438201 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6c8de5f657cad7ad0f870dc4bcd6b06f24b6d333a34a60536bb6e1312ce4cd2"} err="failed to get container status \"a6c8de5f657cad7ad0f870dc4bcd6b06f24b6d333a34a60536bb6e1312ce4cd2\": rpc error: code = NotFound desc = could not find container \"a6c8de5f657cad7ad0f870dc4bcd6b06f24b6d333a34a60536bb6e1312ce4cd2\": container with ID starting with a6c8de5f657cad7ad0f870dc4bcd6b06f24b6d333a34a60536bb6e1312ce4cd2 not found: ID does not exist" Jan 23 12:41:14 crc kubenswrapper[4865]: I0123 12:41:14.127720 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6b23ae1-254d-40ae-9254-1beda3230476" path="/var/lib/kubelet/pods/b6b23ae1-254d-40ae-9254-1beda3230476/volumes" Jan 23 12:41:18 crc kubenswrapper[4865]: I0123 12:41:18.777125 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:41:18 crc kubenswrapper[4865]: I0123 12:41:18.777506 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:41:18 crc kubenswrapper[4865]: I0123 12:41:18.777564 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 12:41:18 crc kubenswrapper[4865]: I0123 12:41:18.778439 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea4df3191f9b072e2daa3f3b9785980c394ff89c94c943c5e65a746cd902c595"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 12:41:18 crc kubenswrapper[4865]: I0123 12:41:18.778500 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://ea4df3191f9b072e2daa3f3b9785980c394ff89c94c943c5e65a746cd902c595" gracePeriod=600 Jan 23 12:41:19 crc kubenswrapper[4865]: I0123 12:41:19.447660 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="ea4df3191f9b072e2daa3f3b9785980c394ff89c94c943c5e65a746cd902c595" exitCode=0 Jan 23 12:41:19 crc kubenswrapper[4865]: I0123 12:41:19.447730 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"ea4df3191f9b072e2daa3f3b9785980c394ff89c94c943c5e65a746cd902c595"} Jan 23 12:41:19 crc kubenswrapper[4865]: I0123 12:41:19.448167 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8"} Jan 23 12:41:19 crc kubenswrapper[4865]: I0123 12:41:19.448185 4865 scope.go:117] "RemoveContainer" containerID="b03ef7a4a210b7096beb55ddd9a51eba7efc4e6f04267f9f1c0777581ea03c06" Jan 23 12:43:10 crc kubenswrapper[4865]: I0123 12:43:10.036790 4865 generic.go:334] "Generic (PLEG): container finished" podID="7f0fd562-a93d-4e58-8742-191dcc7dfeea" containerID="97f85eac8cc9a7db0ecde66b02e7e17f18efc0ed61d02ae08310acbf2890b2e7" exitCode=0 Jan 23 12:43:10 crc kubenswrapper[4865]: I0123 12:43:10.037004 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" event={"ID":"7f0fd562-a93d-4e58-8742-191dcc7dfeea","Type":"ContainerDied","Data":"97f85eac8cc9a7db0ecde66b02e7e17f18efc0ed61d02ae08310acbf2890b2e7"} Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.499715 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.631015 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smkz5\" (UniqueName: \"kubernetes.io/projected/7f0fd562-a93d-4e58-8742-191dcc7dfeea-kube-api-access-smkz5\") pod \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.631318 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-telemetry-combined-ca-bundle\") pod \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.631343 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ssh-key-openstack-edpm-ipam\") pod \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.631402 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-2\") pod \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.631513 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-inventory\") pod \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.631632 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-1\") pod \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.631657 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-0\") pod \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\" (UID: \"7f0fd562-a93d-4e58-8742-191dcc7dfeea\") " Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.644293 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0fd562-a93d-4e58-8742-191dcc7dfeea-kube-api-access-smkz5" (OuterVolumeSpecName: "kube-api-access-smkz5") pod "7f0fd562-a93d-4e58-8742-191dcc7dfeea" (UID: "7f0fd562-a93d-4e58-8742-191dcc7dfeea"). InnerVolumeSpecName "kube-api-access-smkz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.682790 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7f0fd562-a93d-4e58-8742-191dcc7dfeea" (UID: "7f0fd562-a93d-4e58-8742-191dcc7dfeea"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.747730 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smkz5\" (UniqueName: \"kubernetes.io/projected/7f0fd562-a93d-4e58-8742-191dcc7dfeea-kube-api-access-smkz5\") on node \"crc\" DevicePath \"\"" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.747764 4865 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.767755 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "7f0fd562-a93d-4e58-8742-191dcc7dfeea" (UID: "7f0fd562-a93d-4e58-8742-191dcc7dfeea"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.771118 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "7f0fd562-a93d-4e58-8742-191dcc7dfeea" (UID: "7f0fd562-a93d-4e58-8742-191dcc7dfeea"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.814801 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "7f0fd562-a93d-4e58-8742-191dcc7dfeea" (UID: "7f0fd562-a93d-4e58-8742-191dcc7dfeea"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.828803 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-inventory" (OuterVolumeSpecName: "inventory") pod "7f0fd562-a93d-4e58-8742-191dcc7dfeea" (UID: "7f0fd562-a93d-4e58-8742-191dcc7dfeea"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.836776 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f0fd562-a93d-4e58-8742-191dcc7dfeea" (UID: "7f0fd562-a93d-4e58-8742-191dcc7dfeea"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.853950 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.853977 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.854019 4865 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-inventory\") on node \"crc\" DevicePath \"\"" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.854030 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 23 12:43:11 crc kubenswrapper[4865]: I0123 12:43:11.854039 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/7f0fd562-a93d-4e58-8742-191dcc7dfeea-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 23 12:43:12 crc kubenswrapper[4865]: I0123 12:43:12.055570 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" event={"ID":"7f0fd562-a93d-4e58-8742-191dcc7dfeea","Type":"ContainerDied","Data":"6ffd357a1a0681048d2e019e645ea571246cb4167c59f153473a2b29595a6e2f"} Jan 23 12:43:12 crc kubenswrapper[4865]: I0123 12:43:12.055623 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ffd357a1a0681048d2e019e645ea571246cb4167c59f153473a2b29595a6e2f" Jan 23 12:43:12 crc kubenswrapper[4865]: I0123 12:43:12.055701 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-xw8l7" Jan 23 12:43:48 crc kubenswrapper[4865]: I0123 12:43:48.775998 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:43:48 crc kubenswrapper[4865]: I0123 12:43:48.776492 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.808355 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 23 12:44:17 crc kubenswrapper[4865]: E0123 12:44:17.809429 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302fc538-aa01-4724-aa9c-3d9511142231" containerName="extract-utilities" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.809446 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="302fc538-aa01-4724-aa9c-3d9511142231" containerName="extract-utilities" Jan 23 12:44:17 crc kubenswrapper[4865]: E0123 12:44:17.809469 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b23ae1-254d-40ae-9254-1beda3230476" containerName="extract-utilities" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.809477 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b23ae1-254d-40ae-9254-1beda3230476" containerName="extract-utilities" Jan 23 12:44:17 crc kubenswrapper[4865]: E0123 12:44:17.809494 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b23ae1-254d-40ae-9254-1beda3230476" containerName="registry-server" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.809502 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b23ae1-254d-40ae-9254-1beda3230476" containerName="registry-server" Jan 23 12:44:17 crc kubenswrapper[4865]: E0123 12:44:17.809517 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302fc538-aa01-4724-aa9c-3d9511142231" containerName="registry-server" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.809525 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="302fc538-aa01-4724-aa9c-3d9511142231" containerName="registry-server" Jan 23 12:44:17 crc kubenswrapper[4865]: E0123 12:44:17.809537 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0fd562-a93d-4e58-8742-191dcc7dfeea" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.809546 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0fd562-a93d-4e58-8742-191dcc7dfeea" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 12:44:17 crc kubenswrapper[4865]: E0123 12:44:17.809584 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302fc538-aa01-4724-aa9c-3d9511142231" containerName="extract-content" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.809592 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="302fc538-aa01-4724-aa9c-3d9511142231" containerName="extract-content" Jan 23 12:44:17 crc kubenswrapper[4865]: E0123 12:44:17.809624 4865 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6b23ae1-254d-40ae-9254-1beda3230476" containerName="extract-content" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.809635 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6b23ae1-254d-40ae-9254-1beda3230476" containerName="extract-content" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.809887 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6b23ae1-254d-40ae-9254-1beda3230476" containerName="registry-server" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.809915 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f0fd562-a93d-4e58-8742-191dcc7dfeea" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.809932 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="302fc538-aa01-4724-aa9c-3d9511142231" containerName="registry-server" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.810744 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.813968 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.816418 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.816534 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rsw8g" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.817135 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.835674 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.946375 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.946527 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6083e716-8bbf-40bf-abdd-87e865a2f7ae-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.946565 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nnnj\" (UniqueName: \"kubernetes.io/projected/6083e716-8bbf-40bf-abdd-87e865a2f7ae-kube-api-access-9nnnj\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.946618 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6083e716-8bbf-40bf-abdd-87e865a2f7ae-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.946656 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6083e716-8bbf-40bf-abdd-87e865a2f7ae-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.946694 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.946717 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.946760 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:17 crc kubenswrapper[4865]: I0123 12:44:17.946787 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6083e716-8bbf-40bf-abdd-87e865a2f7ae-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.048644 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.048724 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6083e716-8bbf-40bf-abdd-87e865a2f7ae-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.048783 4865 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.048901 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6083e716-8bbf-40bf-abdd-87e865a2f7ae-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.048933 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nnnj\" (UniqueName: \"kubernetes.io/projected/6083e716-8bbf-40bf-abdd-87e865a2f7ae-kube-api-access-9nnnj\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.048964 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6083e716-8bbf-40bf-abdd-87e865a2f7ae-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.048996 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6083e716-8bbf-40bf-abdd-87e865a2f7ae-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.049029 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.049050 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.049251 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.049727 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/6083e716-8bbf-40bf-abdd-87e865a2f7ae-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.050012 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6083e716-8bbf-40bf-abdd-87e865a2f7ae-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.050267 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6083e716-8bbf-40bf-abdd-87e865a2f7ae-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.051396 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6083e716-8bbf-40bf-abdd-87e865a2f7ae-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.056181 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.058645 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.063965 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.065852 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nnnj\" (UniqueName: \"kubernetes.io/projected/6083e716-8bbf-40bf-abdd-87e865a2f7ae-kube-api-access-9nnnj\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.073102 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.136715 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.675896 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.685815 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.769150 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"6083e716-8bbf-40bf-abdd-87e865a2f7ae","Type":"ContainerStarted","Data":"887e44558db70fa1643ba6c8e9fc27aad4c10fc0af6af73d30fcc367208a65f7"} Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.776276 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:44:18 crc kubenswrapper[4865]: I0123 12:44:18.776384 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:44:48 crc kubenswrapper[4865]: I0123 12:44:48.776464 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:44:48 crc kubenswrapper[4865]: I0123 12:44:48.777053 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:44:48 crc kubenswrapper[4865]: I0123 12:44:48.777115 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 12:44:48 crc kubenswrapper[4865]: I0123 12:44:48.778275 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 12:44:48 crc kubenswrapper[4865]: I0123 12:44:48.778389 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" gracePeriod=600 Jan 23 12:44:50 crc kubenswrapper[4865]: I0123 12:44:50.085249 4865 
generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" exitCode=0 Jan 23 12:44:50 crc kubenswrapper[4865]: I0123 12:44:50.085412 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8"} Jan 23 12:44:50 crc kubenswrapper[4865]: I0123 12:44:50.085501 4865 scope.go:117] "RemoveContainer" containerID="ea4df3191f9b072e2daa3f3b9785980c394ff89c94c943c5e65a746cd902c595" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.154470 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2"] Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.156859 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.163533 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.163563 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.195186 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2"] Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.325751 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b84e6b-8a42-48fa-9ddb-bf80590e260c-secret-volume\") pod \"collect-profiles-29486205-77lt2\" (UID: \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.325889 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b84e6b-8a42-48fa-9ddb-bf80590e260c-config-volume\") pod \"collect-profiles-29486205-77lt2\" (UID: \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.325961 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7jxx\" (UniqueName: \"kubernetes.io/projected/70b84e6b-8a42-48fa-9ddb-bf80590e260c-kube-api-access-f7jxx\") pod \"collect-profiles-29486205-77lt2\" (UID: \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.427973 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b84e6b-8a42-48fa-9ddb-bf80590e260c-secret-volume\") pod \"collect-profiles-29486205-77lt2\" (UID: \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.429589 4865 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b84e6b-8a42-48fa-9ddb-bf80590e260c-config-volume\") pod \"collect-profiles-29486205-77lt2\" (UID: \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.429768 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7jxx\" (UniqueName: \"kubernetes.io/projected/70b84e6b-8a42-48fa-9ddb-bf80590e260c-kube-api-access-f7jxx\") pod \"collect-profiles-29486205-77lt2\" (UID: \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.431170 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b84e6b-8a42-48fa-9ddb-bf80590e260c-config-volume\") pod \"collect-profiles-29486205-77lt2\" (UID: \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.449524 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b84e6b-8a42-48fa-9ddb-bf80590e260c-secret-volume\") pod \"collect-profiles-29486205-77lt2\" (UID: \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.450442 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7jxx\" (UniqueName: \"kubernetes.io/projected/70b84e6b-8a42-48fa-9ddb-bf80590e260c-kube-api-access-f7jxx\") pod \"collect-profiles-29486205-77lt2\" (UID: \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:00 crc kubenswrapper[4865]: I0123 12:45:00.492816 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:00 crc kubenswrapper[4865]: E0123 12:45:00.611305 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:45:00 crc kubenswrapper[4865]: E0123 12:45:00.709919 4865 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:45:00 crc kubenswrapper[4865]: E0123 12:45:00.710307 4865 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb" Jan 23 12:45:00 crc kubenswrapper[4865]: E0123 12:45:00.711254 4865 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:38.102.83.132:5001/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nnnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser
:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-multi-thread-testing_openstack(6083e716-8bbf-40bf-abdd-87e865a2f7ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 12:45:00 crc kubenswrapper[4865]: E0123 12:45:00.714916 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" Jan 23 12:45:01 crc kubenswrapper[4865]: I0123 12:45:01.101535 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2"] Jan 23 12:45:01 crc kubenswrapper[4865]: W0123 12:45:01.103064 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70b84e6b_8a42_48fa_9ddb_bf80590e260c.slice/crio-c5345b836ad8a968c6955b4283b95c2fd8daa63efb4e56a7006255f194a1b952 WatchSource:0}: Error finding container c5345b836ad8a968c6955b4283b95c2fd8daa63efb4e56a7006255f194a1b952: Status 404 returned error can't find the container with id c5345b836ad8a968c6955b4283b95c2fd8daa63efb4e56a7006255f194a1b952 Jan 23 12:45:01 crc kubenswrapper[4865]: I0123 12:45:01.203206 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:45:01 crc kubenswrapper[4865]: E0123 12:45:01.203824 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:45:01 crc kubenswrapper[4865]: I0123 12:45:01.208521 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" event={"ID":"70b84e6b-8a42-48fa-9ddb-bf80590e260c","Type":"ContainerStarted","Data":"c5345b836ad8a968c6955b4283b95c2fd8daa63efb4e56a7006255f194a1b952"} Jan 23 12:45:01 crc kubenswrapper[4865]: E0123 12:45:01.211117 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb\\\"\"" 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" Jan 23 12:45:02 crc kubenswrapper[4865]: I0123 12:45:02.216237 4865 generic.go:334] "Generic (PLEG): container finished" podID="70b84e6b-8a42-48fa-9ddb-bf80590e260c" containerID="9caf0f89af9aa778735f05dd8dcba248d45def40c9485a72545596aa9364ae39" exitCode=0 Jan 23 12:45:02 crc kubenswrapper[4865]: I0123 12:45:02.216302 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" event={"ID":"70b84e6b-8a42-48fa-9ddb-bf80590e260c","Type":"ContainerDied","Data":"9caf0f89af9aa778735f05dd8dcba248d45def40c9485a72545596aa9364ae39"} Jan 23 12:45:03 crc kubenswrapper[4865]: I0123 12:45:03.547537 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:03 crc kubenswrapper[4865]: I0123 12:45:03.696082 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b84e6b-8a42-48fa-9ddb-bf80590e260c-config-volume\") pod \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\" (UID: \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\") " Jan 23 12:45:03 crc kubenswrapper[4865]: I0123 12:45:03.696568 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b84e6b-8a42-48fa-9ddb-bf80590e260c-secret-volume\") pod \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\" (UID: \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\") " Jan 23 12:45:03 crc kubenswrapper[4865]: I0123 12:45:03.696650 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7jxx\" (UniqueName: \"kubernetes.io/projected/70b84e6b-8a42-48fa-9ddb-bf80590e260c-kube-api-access-f7jxx\") pod \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\" (UID: \"70b84e6b-8a42-48fa-9ddb-bf80590e260c\") " Jan 23 12:45:03 crc kubenswrapper[4865]: I0123 12:45:03.697215 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70b84e6b-8a42-48fa-9ddb-bf80590e260c-config-volume" (OuterVolumeSpecName: "config-volume") pod "70b84e6b-8a42-48fa-9ddb-bf80590e260c" (UID: "70b84e6b-8a42-48fa-9ddb-bf80590e260c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:45:03 crc kubenswrapper[4865]: I0123 12:45:03.702451 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b84e6b-8a42-48fa-9ddb-bf80590e260c-kube-api-access-f7jxx" (OuterVolumeSpecName: "kube-api-access-f7jxx") pod "70b84e6b-8a42-48fa-9ddb-bf80590e260c" (UID: "70b84e6b-8a42-48fa-9ddb-bf80590e260c"). InnerVolumeSpecName "kube-api-access-f7jxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:45:03 crc kubenswrapper[4865]: I0123 12:45:03.703079 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70b84e6b-8a42-48fa-9ddb-bf80590e260c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "70b84e6b-8a42-48fa-9ddb-bf80590e260c" (UID: "70b84e6b-8a42-48fa-9ddb-bf80590e260c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:45:03 crc kubenswrapper[4865]: I0123 12:45:03.798694 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7jxx\" (UniqueName: \"kubernetes.io/projected/70b84e6b-8a42-48fa-9ddb-bf80590e260c-kube-api-access-f7jxx\") on node \"crc\" DevicePath \"\"" Jan 23 12:45:03 crc kubenswrapper[4865]: I0123 12:45:03.798727 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70b84e6b-8a42-48fa-9ddb-bf80590e260c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 12:45:03 crc kubenswrapper[4865]: I0123 12:45:03.798738 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/70b84e6b-8a42-48fa-9ddb-bf80590e260c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 12:45:04 crc kubenswrapper[4865]: I0123 12:45:04.234115 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" event={"ID":"70b84e6b-8a42-48fa-9ddb-bf80590e260c","Type":"ContainerDied","Data":"c5345b836ad8a968c6955b4283b95c2fd8daa63efb4e56a7006255f194a1b952"} Jan 23 12:45:04 crc kubenswrapper[4865]: I0123 12:45:04.234161 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5345b836ad8a968c6955b4283b95c2fd8daa63efb4e56a7006255f194a1b952" Jan 23 12:45:04 crc kubenswrapper[4865]: I0123 12:45:04.234489 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486205-77lt2" Jan 23 12:45:04 crc kubenswrapper[4865]: I0123 12:45:04.643237 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz"] Jan 23 12:45:04 crc kubenswrapper[4865]: I0123 12:45:04.651754 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486160-wzvsz"] Jan 23 12:45:06 crc kubenswrapper[4865]: I0123 12:45:06.133794 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db41c5d0-dcfc-47eb-a67b-1d4875fafcfd" path="/var/lib/kubelet/pods/db41c5d0-dcfc-47eb-a67b-1d4875fafcfd/volumes" Jan 23 12:45:14 crc kubenswrapper[4865]: I0123 12:45:14.119754 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:45:14 crc kubenswrapper[4865]: E0123 12:45:14.123963 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:45:14 crc kubenswrapper[4865]: I0123 12:45:14.188570 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 23 12:45:15 crc kubenswrapper[4865]: I0123 12:45:15.341002 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"6083e716-8bbf-40bf-abdd-87e865a2f7ae","Type":"ContainerStarted","Data":"318fd508c84d3d6ab78b7cb8780af4462f79c7afaee925e3734567fc1e2961dd"} Jan 23 12:45:15 crc kubenswrapper[4865]: I0123 12:45:15.360105 4865 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=3.85963506 podStartE2EDuration="59.360090612s" podCreationTimestamp="2026-01-23 12:44:16 +0000 UTC" firstStartedPulling="2026-01-23 12:44:18.685439528 +0000 UTC m=+3102.854511754" lastFinishedPulling="2026-01-23 12:45:14.18589508 +0000 UTC m=+3158.354967306" observedRunningTime="2026-01-23 12:45:15.35876777 +0000 UTC m=+3159.527839996" watchObservedRunningTime="2026-01-23 12:45:15.360090612 +0000 UTC m=+3159.529162838" Jan 23 12:45:28 crc kubenswrapper[4865]: I0123 12:45:28.118357 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:45:28 crc kubenswrapper[4865]: E0123 12:45:28.120801 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:45:32 crc kubenswrapper[4865]: I0123 12:45:32.773384 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 12:45:42 crc kubenswrapper[4865]: I0123 12:45:42.118299 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:45:42 crc kubenswrapper[4865]: E0123 12:45:42.119266 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:45:54 crc kubenswrapper[4865]: I0123 12:45:54.164944 4865 scope.go:117] "RemoveContainer" containerID="54279a3cfbb7024e5c9217d0603a39e6672d78f5bade43871dd3423a2e98c57c" Jan 23 12:45:55 crc kubenswrapper[4865]: I0123 12:45:55.123761 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:45:55 crc kubenswrapper[4865]: E0123 12:45:55.124315 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:46:07 crc kubenswrapper[4865]: I0123 12:46:07.118549 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:46:07 crc kubenswrapper[4865]: E0123 12:46:07.119233 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:46:20 crc kubenswrapper[4865]: I0123 12:46:20.117858 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:46:20 crc kubenswrapper[4865]: E0123 12:46:20.118534 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:46:35 crc kubenswrapper[4865]: I0123 12:46:35.119937 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:46:35 crc kubenswrapper[4865]: E0123 12:46:35.120737 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:46:46 crc kubenswrapper[4865]: I0123 12:46:46.123638 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:46:46 crc kubenswrapper[4865]: E0123 12:46:46.124523 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:46:58 crc kubenswrapper[4865]: I0123 12:46:58.118808 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:46:58 crc kubenswrapper[4865]: E0123 12:46:58.119687 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:47:12 crc kubenswrapper[4865]: I0123 12:47:12.120706 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:47:12 crc kubenswrapper[4865]: E0123 12:47:12.121343 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" 
podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:47:23 crc kubenswrapper[4865]: I0123 12:47:23.118662 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:47:23 crc kubenswrapper[4865]: E0123 12:47:23.119652 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:47:37 crc kubenswrapper[4865]: I0123 12:47:37.120357 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:47:37 crc kubenswrapper[4865]: E0123 12:47:37.120989 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:47:51 crc kubenswrapper[4865]: I0123 12:47:51.119008 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:47:51 crc kubenswrapper[4865]: E0123 12:47:51.119840 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:48:06 crc kubenswrapper[4865]: I0123 12:48:06.124452 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:48:06 crc kubenswrapper[4865]: E0123 12:48:06.125181 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:48:12 crc kubenswrapper[4865]: I0123 12:48:12.872918 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dnnrk"] Jan 23 12:48:12 crc kubenswrapper[4865]: E0123 12:48:12.874143 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b84e6b-8a42-48fa-9ddb-bf80590e260c" containerName="collect-profiles" Jan 23 12:48:12 crc kubenswrapper[4865]: I0123 12:48:12.874158 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b84e6b-8a42-48fa-9ddb-bf80590e260c" containerName="collect-profiles" Jan 23 12:48:12 crc kubenswrapper[4865]: I0123 12:48:12.874355 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b84e6b-8a42-48fa-9ddb-bf80590e260c" containerName="collect-profiles" Jan 23 12:48:12 crc kubenswrapper[4865]: I0123 
12:48:12.875782 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:12 crc kubenswrapper[4865]: I0123 12:48:12.925745 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnnrk"] Jan 23 12:48:12 crc kubenswrapper[4865]: I0123 12:48:12.928729 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfd2s\" (UniqueName: \"kubernetes.io/projected/24893163-c92d-491f-a980-b68a2597ec9b-kube-api-access-mfd2s\") pod \"certified-operators-dnnrk\" (UID: \"24893163-c92d-491f-a980-b68a2597ec9b\") " pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:12 crc kubenswrapper[4865]: I0123 12:48:12.929365 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24893163-c92d-491f-a980-b68a2597ec9b-catalog-content\") pod \"certified-operators-dnnrk\" (UID: \"24893163-c92d-491f-a980-b68a2597ec9b\") " pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:12 crc kubenswrapper[4865]: I0123 12:48:12.929422 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24893163-c92d-491f-a980-b68a2597ec9b-utilities\") pod \"certified-operators-dnnrk\" (UID: \"24893163-c92d-491f-a980-b68a2597ec9b\") " pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:13 crc kubenswrapper[4865]: I0123 12:48:13.030771 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24893163-c92d-491f-a980-b68a2597ec9b-catalog-content\") pod \"certified-operators-dnnrk\" (UID: \"24893163-c92d-491f-a980-b68a2597ec9b\") " pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:13 crc kubenswrapper[4865]: I0123 12:48:13.030823 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24893163-c92d-491f-a980-b68a2597ec9b-utilities\") pod \"certified-operators-dnnrk\" (UID: \"24893163-c92d-491f-a980-b68a2597ec9b\") " pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:13 crc kubenswrapper[4865]: I0123 12:48:13.030867 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfd2s\" (UniqueName: \"kubernetes.io/projected/24893163-c92d-491f-a980-b68a2597ec9b-kube-api-access-mfd2s\") pod \"certified-operators-dnnrk\" (UID: \"24893163-c92d-491f-a980-b68a2597ec9b\") " pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:13 crc kubenswrapper[4865]: I0123 12:48:13.031228 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24893163-c92d-491f-a980-b68a2597ec9b-catalog-content\") pod \"certified-operators-dnnrk\" (UID: \"24893163-c92d-491f-a980-b68a2597ec9b\") " pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:13 crc kubenswrapper[4865]: I0123 12:48:13.031466 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24893163-c92d-491f-a980-b68a2597ec9b-utilities\") pod \"certified-operators-dnnrk\" (UID: \"24893163-c92d-491f-a980-b68a2597ec9b\") " pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:13 crc 
kubenswrapper[4865]: I0123 12:48:13.061101 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfd2s\" (UniqueName: \"kubernetes.io/projected/24893163-c92d-491f-a980-b68a2597ec9b-kube-api-access-mfd2s\") pod \"certified-operators-dnnrk\" (UID: \"24893163-c92d-491f-a980-b68a2597ec9b\") " pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:13 crc kubenswrapper[4865]: I0123 12:48:13.211329 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:15 crc kubenswrapper[4865]: I0123 12:48:15.746919 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnnrk"] Jan 23 12:48:15 crc kubenswrapper[4865]: I0123 12:48:15.978205 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnrk" event={"ID":"24893163-c92d-491f-a980-b68a2597ec9b","Type":"ContainerStarted","Data":"d34be25e3fecd2dad1f1bbe7a00e27b002e589ecbfdab1cb0293673addc06d08"} Jan 23 12:48:16 crc kubenswrapper[4865]: I0123 12:48:16.996124 4865 generic.go:334] "Generic (PLEG): container finished" podID="24893163-c92d-491f-a980-b68a2597ec9b" containerID="2bb4873b1e3ee4e0304db9e1c64ff74c4eb0effca89b7cf45f851303b0734972" exitCode=0 Jan 23 12:48:16 crc kubenswrapper[4865]: I0123 12:48:16.996326 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnrk" event={"ID":"24893163-c92d-491f-a980-b68a2597ec9b","Type":"ContainerDied","Data":"2bb4873b1e3ee4e0304db9e1c64ff74c4eb0effca89b7cf45f851303b0734972"} Jan 23 12:48:17 crc kubenswrapper[4865]: I0123 12:48:17.119169 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:48:17 crc kubenswrapper[4865]: E0123 12:48:17.119366 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:48:20 crc kubenswrapper[4865]: I0123 12:48:20.022336 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnrk" event={"ID":"24893163-c92d-491f-a980-b68a2597ec9b","Type":"ContainerStarted","Data":"0e784f98ddc4e4a20413b009d8b0ba01bbb9ffa9b17320ad1a8b0d061906fd4b"} Jan 23 12:48:21 crc kubenswrapper[4865]: I0123 12:48:21.032767 4865 generic.go:334] "Generic (PLEG): container finished" podID="24893163-c92d-491f-a980-b68a2597ec9b" containerID="0e784f98ddc4e4a20413b009d8b0ba01bbb9ffa9b17320ad1a8b0d061906fd4b" exitCode=0 Jan 23 12:48:21 crc kubenswrapper[4865]: I0123 12:48:21.032997 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnrk" event={"ID":"24893163-c92d-491f-a980-b68a2597ec9b","Type":"ContainerDied","Data":"0e784f98ddc4e4a20413b009d8b0ba01bbb9ffa9b17320ad1a8b0d061906fd4b"} Jan 23 12:48:23 crc kubenswrapper[4865]: I0123 12:48:23.049501 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnrk" event={"ID":"24893163-c92d-491f-a980-b68a2597ec9b","Type":"ContainerStarted","Data":"450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b"} Jan 23 
12:48:23 crc kubenswrapper[4865]: I0123 12:48:23.064068 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dnnrk" podStartSLOduration=5.393340589 podStartE2EDuration="11.064051136s" podCreationTimestamp="2026-01-23 12:48:12 +0000 UTC" firstStartedPulling="2026-01-23 12:48:16.998579546 +0000 UTC m=+3341.167651772" lastFinishedPulling="2026-01-23 12:48:22.669290093 +0000 UTC m=+3346.838362319" observedRunningTime="2026-01-23 12:48:23.062714563 +0000 UTC m=+3347.231786789" watchObservedRunningTime="2026-01-23 12:48:23.064051136 +0000 UTC m=+3347.233123362" Jan 23 12:48:23 crc kubenswrapper[4865]: I0123 12:48:23.212375 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:23 crc kubenswrapper[4865]: I0123 12:48:23.212768 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:24 crc kubenswrapper[4865]: I0123 12:48:24.265886 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dnnrk" podUID="24893163-c92d-491f-a980-b68a2597ec9b" containerName="registry-server" probeResult="failure" output=< Jan 23 12:48:24 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:48:24 crc kubenswrapper[4865]: > Jan 23 12:48:32 crc kubenswrapper[4865]: I0123 12:48:32.118793 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:48:32 crc kubenswrapper[4865]: E0123 12:48:32.119716 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:48:33 crc kubenswrapper[4865]: I0123 12:48:33.253342 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:33 crc kubenswrapper[4865]: I0123 12:48:33.321066 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:33 crc kubenswrapper[4865]: I0123 12:48:33.507794 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnnrk"] Jan 23 12:48:35 crc kubenswrapper[4865]: I0123 12:48:35.165974 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dnnrk" podUID="24893163-c92d-491f-a980-b68a2597ec9b" containerName="registry-server" containerID="cri-o://450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b" gracePeriod=2 Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.080045 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.176312 4865 generic.go:334] "Generic (PLEG): container finished" podID="24893163-c92d-491f-a980-b68a2597ec9b" containerID="450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b" exitCode=0 Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.176551 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnrk" event={"ID":"24893163-c92d-491f-a980-b68a2597ec9b","Type":"ContainerDied","Data":"450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b"} Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.176577 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnrk" event={"ID":"24893163-c92d-491f-a980-b68a2597ec9b","Type":"ContainerDied","Data":"d34be25e3fecd2dad1f1bbe7a00e27b002e589ecbfdab1cb0293673addc06d08"} Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.176593 4865 scope.go:117] "RemoveContainer" containerID="450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.176731 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnnrk" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.202562 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24893163-c92d-491f-a980-b68a2597ec9b-catalog-content\") pod \"24893163-c92d-491f-a980-b68a2597ec9b\" (UID: \"24893163-c92d-491f-a980-b68a2597ec9b\") " Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.202619 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfd2s\" (UniqueName: \"kubernetes.io/projected/24893163-c92d-491f-a980-b68a2597ec9b-kube-api-access-mfd2s\") pod \"24893163-c92d-491f-a980-b68a2597ec9b\" (UID: \"24893163-c92d-491f-a980-b68a2597ec9b\") " Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.202799 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24893163-c92d-491f-a980-b68a2597ec9b-utilities\") pod \"24893163-c92d-491f-a980-b68a2597ec9b\" (UID: \"24893163-c92d-491f-a980-b68a2597ec9b\") " Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.212192 4865 scope.go:117] "RemoveContainer" containerID="0e784f98ddc4e4a20413b009d8b0ba01bbb9ffa9b17320ad1a8b0d061906fd4b" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.213044 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24893163-c92d-491f-a980-b68a2597ec9b-utilities" (OuterVolumeSpecName: "utilities") pod "24893163-c92d-491f-a980-b68a2597ec9b" (UID: "24893163-c92d-491f-a980-b68a2597ec9b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.249519 4865 scope.go:117] "RemoveContainer" containerID="2bb4873b1e3ee4e0304db9e1c64ff74c4eb0effca89b7cf45f851303b0734972" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.300626 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24893163-c92d-491f-a980-b68a2597ec9b-kube-api-access-mfd2s" (OuterVolumeSpecName: "kube-api-access-mfd2s") pod "24893163-c92d-491f-a980-b68a2597ec9b" (UID: "24893163-c92d-491f-a980-b68a2597ec9b"). InnerVolumeSpecName "kube-api-access-mfd2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.304768 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24893163-c92d-491f-a980-b68a2597ec9b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.304793 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfd2s\" (UniqueName: \"kubernetes.io/projected/24893163-c92d-491f-a980-b68a2597ec9b-kube-api-access-mfd2s\") on node \"crc\" DevicePath \"\"" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.305952 4865 scope.go:117] "RemoveContainer" containerID="450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b" Jan 23 12:48:36 crc kubenswrapper[4865]: E0123 12:48:36.306810 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b\": container with ID starting with 450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b not found: ID does not exist" containerID="450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.306839 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b"} err="failed to get container status \"450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b\": rpc error: code = NotFound desc = could not find container \"450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b\": container with ID starting with 450b6c404f4113c457777c14a15f8de5ca3d738488a978edd83c89d833dca62b not found: ID does not exist" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.306860 4865 scope.go:117] "RemoveContainer" containerID="0e784f98ddc4e4a20413b009d8b0ba01bbb9ffa9b17320ad1a8b0d061906fd4b" Jan 23 12:48:36 crc kubenswrapper[4865]: E0123 12:48:36.307438 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e784f98ddc4e4a20413b009d8b0ba01bbb9ffa9b17320ad1a8b0d061906fd4b\": container with ID starting with 0e784f98ddc4e4a20413b009d8b0ba01bbb9ffa9b17320ad1a8b0d061906fd4b not found: ID does not exist" containerID="0e784f98ddc4e4a20413b009d8b0ba01bbb9ffa9b17320ad1a8b0d061906fd4b" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.307483 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e784f98ddc4e4a20413b009d8b0ba01bbb9ffa9b17320ad1a8b0d061906fd4b"} err="failed to get container status \"0e784f98ddc4e4a20413b009d8b0ba01bbb9ffa9b17320ad1a8b0d061906fd4b\": rpc error: code = NotFound desc = could not find container 
\"0e784f98ddc4e4a20413b009d8b0ba01bbb9ffa9b17320ad1a8b0d061906fd4b\": container with ID starting with 0e784f98ddc4e4a20413b009d8b0ba01bbb9ffa9b17320ad1a8b0d061906fd4b not found: ID does not exist" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.307510 4865 scope.go:117] "RemoveContainer" containerID="2bb4873b1e3ee4e0304db9e1c64ff74c4eb0effca89b7cf45f851303b0734972" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.307905 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24893163-c92d-491f-a980-b68a2597ec9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24893163-c92d-491f-a980-b68a2597ec9b" (UID: "24893163-c92d-491f-a980-b68a2597ec9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:48:36 crc kubenswrapper[4865]: E0123 12:48:36.308004 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bb4873b1e3ee4e0304db9e1c64ff74c4eb0effca89b7cf45f851303b0734972\": container with ID starting with 2bb4873b1e3ee4e0304db9e1c64ff74c4eb0effca89b7cf45f851303b0734972 not found: ID does not exist" containerID="2bb4873b1e3ee4e0304db9e1c64ff74c4eb0effca89b7cf45f851303b0734972" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.308028 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bb4873b1e3ee4e0304db9e1c64ff74c4eb0effca89b7cf45f851303b0734972"} err="failed to get container status \"2bb4873b1e3ee4e0304db9e1c64ff74c4eb0effca89b7cf45f851303b0734972\": rpc error: code = NotFound desc = could not find container \"2bb4873b1e3ee4e0304db9e1c64ff74c4eb0effca89b7cf45f851303b0734972\": container with ID starting with 2bb4873b1e3ee4e0304db9e1c64ff74c4eb0effca89b7cf45f851303b0734972 not found: ID does not exist" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.407085 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24893163-c92d-491f-a980-b68a2597ec9b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.513315 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnnrk"] Jan 23 12:48:36 crc kubenswrapper[4865]: I0123 12:48:36.523396 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dnnrk"] Jan 23 12:48:38 crc kubenswrapper[4865]: I0123 12:48:38.131383 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24893163-c92d-491f-a980-b68a2597ec9b" path="/var/lib/kubelet/pods/24893163-c92d-491f-a980-b68a2597ec9b/volumes" Jan 23 12:48:44 crc kubenswrapper[4865]: I0123 12:48:44.118096 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:48:44 crc kubenswrapper[4865]: E0123 12:48:44.119440 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:48:55 crc kubenswrapper[4865]: I0123 12:48:55.118771 4865 scope.go:117] "RemoveContainer" 
containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:48:55 crc kubenswrapper[4865]: E0123 12:48:55.119544 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.010140 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jx54p"] Jan 23 12:48:56 crc kubenswrapper[4865]: E0123 12:48:56.010823 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24893163-c92d-491f-a980-b68a2597ec9b" containerName="extract-content" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.010836 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="24893163-c92d-491f-a980-b68a2597ec9b" containerName="extract-content" Jan 23 12:48:56 crc kubenswrapper[4865]: E0123 12:48:56.010848 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24893163-c92d-491f-a980-b68a2597ec9b" containerName="extract-utilities" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.010855 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="24893163-c92d-491f-a980-b68a2597ec9b" containerName="extract-utilities" Jan 23 12:48:56 crc kubenswrapper[4865]: E0123 12:48:56.010872 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24893163-c92d-491f-a980-b68a2597ec9b" containerName="registry-server" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.010879 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="24893163-c92d-491f-a980-b68a2597ec9b" containerName="registry-server" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.011044 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="24893163-c92d-491f-a980-b68a2597ec9b" containerName="registry-server" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.013182 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.041657 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jx54p"] Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.137041 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb720f4-a0e1-4d1f-98ad-df69596b6b50-catalog-content\") pod \"community-operators-jx54p\" (UID: \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\") " pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.137108 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkmd4\" (UniqueName: \"kubernetes.io/projected/feb720f4-a0e1-4d1f-98ad-df69596b6b50-kube-api-access-vkmd4\") pod \"community-operators-jx54p\" (UID: \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\") " pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.137223 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb720f4-a0e1-4d1f-98ad-df69596b6b50-utilities\") pod \"community-operators-jx54p\" (UID: \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\") " pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.239823 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb720f4-a0e1-4d1f-98ad-df69596b6b50-catalog-content\") pod \"community-operators-jx54p\" (UID: \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\") " pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.239903 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkmd4\" (UniqueName: \"kubernetes.io/projected/feb720f4-a0e1-4d1f-98ad-df69596b6b50-kube-api-access-vkmd4\") pod \"community-operators-jx54p\" (UID: \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\") " pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.239982 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb720f4-a0e1-4d1f-98ad-df69596b6b50-utilities\") pod \"community-operators-jx54p\" (UID: \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\") " pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.240527 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb720f4-a0e1-4d1f-98ad-df69596b6b50-utilities\") pod \"community-operators-jx54p\" (UID: \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\") " pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.242218 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb720f4-a0e1-4d1f-98ad-df69596b6b50-catalog-content\") pod \"community-operators-jx54p\" (UID: \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\") " pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.265319 4865 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vkmd4\" (UniqueName: \"kubernetes.io/projected/feb720f4-a0e1-4d1f-98ad-df69596b6b50-kube-api-access-vkmd4\") pod \"community-operators-jx54p\" (UID: \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\") " pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.332931 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:48:56 crc kubenswrapper[4865]: I0123 12:48:56.876692 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jx54p"] Jan 23 12:48:57 crc kubenswrapper[4865]: I0123 12:48:57.347905 4865 generic.go:334] "Generic (PLEG): container finished" podID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" containerID="0d2e6c9bb04452ae31d411179a459cceecebe468f06bb9289eea63303cadebec" exitCode=0 Jan 23 12:48:57 crc kubenswrapper[4865]: I0123 12:48:57.347948 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx54p" event={"ID":"feb720f4-a0e1-4d1f-98ad-df69596b6b50","Type":"ContainerDied","Data":"0d2e6c9bb04452ae31d411179a459cceecebe468f06bb9289eea63303cadebec"} Jan 23 12:48:57 crc kubenswrapper[4865]: I0123 12:48:57.347973 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx54p" event={"ID":"feb720f4-a0e1-4d1f-98ad-df69596b6b50","Type":"ContainerStarted","Data":"fa9dc29ee2bafb24dbae1dbe0b66aa30cbd83a3492ca6f1dfe830c165dd8cedc"} Jan 23 12:48:59 crc kubenswrapper[4865]: I0123 12:48:59.362648 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx54p" event={"ID":"feb720f4-a0e1-4d1f-98ad-df69596b6b50","Type":"ContainerStarted","Data":"3520cd5f417e87cf9802dd2d7b3a3fc23235fd57b7e8f1a8f454a9516eed2d7f"} Jan 23 12:49:02 crc kubenswrapper[4865]: I0123 12:49:02.977364 4865 generic.go:334] "Generic (PLEG): container finished" podID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" containerID="3520cd5f417e87cf9802dd2d7b3a3fc23235fd57b7e8f1a8f454a9516eed2d7f" exitCode=0 Jan 23 12:49:02 crc kubenswrapper[4865]: I0123 12:49:02.977436 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx54p" event={"ID":"feb720f4-a0e1-4d1f-98ad-df69596b6b50","Type":"ContainerDied","Data":"3520cd5f417e87cf9802dd2d7b3a3fc23235fd57b7e8f1a8f454a9516eed2d7f"} Jan 23 12:49:05 crc kubenswrapper[4865]: I0123 12:49:05.017999 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx54p" event={"ID":"feb720f4-a0e1-4d1f-98ad-df69596b6b50","Type":"ContainerStarted","Data":"9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a"} Jan 23 12:49:05 crc kubenswrapper[4865]: I0123 12:49:05.044376 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jx54p" podStartSLOduration=3.452835172 podStartE2EDuration="10.04435859s" podCreationTimestamp="2026-01-23 12:48:55 +0000 UTC" firstStartedPulling="2026-01-23 12:48:57.350023674 +0000 UTC m=+3381.519095900" lastFinishedPulling="2026-01-23 12:49:03.941547092 +0000 UTC m=+3388.110619318" observedRunningTime="2026-01-23 12:49:05.038745013 +0000 UTC m=+3389.207817239" watchObservedRunningTime="2026-01-23 12:49:05.04435859 +0000 UTC m=+3389.213430816" Jan 23 12:49:06 crc kubenswrapper[4865]: I0123 12:49:06.124583 4865 scope.go:117] "RemoveContainer" 
containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:49:06 crc kubenswrapper[4865]: E0123 12:49:06.125004 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:49:06 crc kubenswrapper[4865]: I0123 12:49:06.333170 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:49:06 crc kubenswrapper[4865]: I0123 12:49:06.333354 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:49:07 crc kubenswrapper[4865]: I0123 12:49:07.383441 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jx54p" podUID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" containerName="registry-server" probeResult="failure" output=< Jan 23 12:49:07 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:49:07 crc kubenswrapper[4865]: > Jan 23 12:49:16 crc kubenswrapper[4865]: I0123 12:49:16.401478 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:49:16 crc kubenswrapper[4865]: I0123 12:49:16.456029 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:49:16 crc kubenswrapper[4865]: I0123 12:49:16.635703 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jx54p"] Jan 23 12:49:18 crc kubenswrapper[4865]: I0123 12:49:18.118625 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:49:18 crc kubenswrapper[4865]: E0123 12:49:18.119069 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:49:18 crc kubenswrapper[4865]: I0123 12:49:18.134833 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jx54p" podUID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" containerName="registry-server" containerID="cri-o://9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a" gracePeriod=2 Jan 23 12:49:18 crc kubenswrapper[4865]: I0123 12:49:18.715952 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:49:18 crc kubenswrapper[4865]: I0123 12:49:18.804869 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb720f4-a0e1-4d1f-98ad-df69596b6b50-utilities\") pod \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\" (UID: \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\") " Jan 23 12:49:18 crc kubenswrapper[4865]: I0123 12:49:18.805092 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkmd4\" (UniqueName: \"kubernetes.io/projected/feb720f4-a0e1-4d1f-98ad-df69596b6b50-kube-api-access-vkmd4\") pod \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\" (UID: \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\") " Jan 23 12:49:18 crc kubenswrapper[4865]: I0123 12:49:18.805123 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb720f4-a0e1-4d1f-98ad-df69596b6b50-catalog-content\") pod \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\" (UID: \"feb720f4-a0e1-4d1f-98ad-df69596b6b50\") " Jan 23 12:49:18 crc kubenswrapper[4865]: I0123 12:49:18.805866 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feb720f4-a0e1-4d1f-98ad-df69596b6b50-utilities" (OuterVolumeSpecName: "utilities") pod "feb720f4-a0e1-4d1f-98ad-df69596b6b50" (UID: "feb720f4-a0e1-4d1f-98ad-df69596b6b50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:49:18 crc kubenswrapper[4865]: I0123 12:49:18.806015 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feb720f4-a0e1-4d1f-98ad-df69596b6b50-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:49:18 crc kubenswrapper[4865]: I0123 12:49:18.832262 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feb720f4-a0e1-4d1f-98ad-df69596b6b50-kube-api-access-vkmd4" (OuterVolumeSpecName: "kube-api-access-vkmd4") pod "feb720f4-a0e1-4d1f-98ad-df69596b6b50" (UID: "feb720f4-a0e1-4d1f-98ad-df69596b6b50"). InnerVolumeSpecName "kube-api-access-vkmd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:49:18 crc kubenswrapper[4865]: I0123 12:49:18.890680 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feb720f4-a0e1-4d1f-98ad-df69596b6b50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "feb720f4-a0e1-4d1f-98ad-df69596b6b50" (UID: "feb720f4-a0e1-4d1f-98ad-df69596b6b50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:49:18 crc kubenswrapper[4865]: I0123 12:49:18.907713 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkmd4\" (UniqueName: \"kubernetes.io/projected/feb720f4-a0e1-4d1f-98ad-df69596b6b50-kube-api-access-vkmd4\") on node \"crc\" DevicePath \"\"" Jan 23 12:49:18 crc kubenswrapper[4865]: I0123 12:49:18.908011 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feb720f4-a0e1-4d1f-98ad-df69596b6b50-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.151607 4865 generic.go:334] "Generic (PLEG): container finished" podID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" containerID="9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a" exitCode=0 Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.151673 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx54p" event={"ID":"feb720f4-a0e1-4d1f-98ad-df69596b6b50","Type":"ContainerDied","Data":"9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a"} Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.151691 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jx54p" Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.151706 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jx54p" event={"ID":"feb720f4-a0e1-4d1f-98ad-df69596b6b50","Type":"ContainerDied","Data":"fa9dc29ee2bafb24dbae1dbe0b66aa30cbd83a3492ca6f1dfe830c165dd8cedc"} Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.151729 4865 scope.go:117] "RemoveContainer" containerID="9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a" Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.198443 4865 scope.go:117] "RemoveContainer" containerID="3520cd5f417e87cf9802dd2d7b3a3fc23235fd57b7e8f1a8f454a9516eed2d7f" Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.210540 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jx54p"] Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.226707 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jx54p"] Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.252428 4865 scope.go:117] "RemoveContainer" containerID="0d2e6c9bb04452ae31d411179a459cceecebe468f06bb9289eea63303cadebec" Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.300319 4865 scope.go:117] "RemoveContainer" containerID="9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a" Jan 23 12:49:19 crc kubenswrapper[4865]: E0123 12:49:19.300914 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a\": container with ID starting with 9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a not found: ID does not exist" containerID="9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a" Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.300968 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a"} err="failed to get container status 
\"9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a\": rpc error: code = NotFound desc = could not find container \"9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a\": container with ID starting with 9d016ed3d37320dac5bbb9197fad3a20c9042e71021c1c66a0e455b92ab47e4a not found: ID does not exist" Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.301003 4865 scope.go:117] "RemoveContainer" containerID="3520cd5f417e87cf9802dd2d7b3a3fc23235fd57b7e8f1a8f454a9516eed2d7f" Jan 23 12:49:19 crc kubenswrapper[4865]: E0123 12:49:19.302135 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3520cd5f417e87cf9802dd2d7b3a3fc23235fd57b7e8f1a8f454a9516eed2d7f\": container with ID starting with 3520cd5f417e87cf9802dd2d7b3a3fc23235fd57b7e8f1a8f454a9516eed2d7f not found: ID does not exist" containerID="3520cd5f417e87cf9802dd2d7b3a3fc23235fd57b7e8f1a8f454a9516eed2d7f" Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.302180 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3520cd5f417e87cf9802dd2d7b3a3fc23235fd57b7e8f1a8f454a9516eed2d7f"} err="failed to get container status \"3520cd5f417e87cf9802dd2d7b3a3fc23235fd57b7e8f1a8f454a9516eed2d7f\": rpc error: code = NotFound desc = could not find container \"3520cd5f417e87cf9802dd2d7b3a3fc23235fd57b7e8f1a8f454a9516eed2d7f\": container with ID starting with 3520cd5f417e87cf9802dd2d7b3a3fc23235fd57b7e8f1a8f454a9516eed2d7f not found: ID does not exist" Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.302207 4865 scope.go:117] "RemoveContainer" containerID="0d2e6c9bb04452ae31d411179a459cceecebe468f06bb9289eea63303cadebec" Jan 23 12:49:19 crc kubenswrapper[4865]: E0123 12:49:19.302494 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d2e6c9bb04452ae31d411179a459cceecebe468f06bb9289eea63303cadebec\": container with ID starting with 0d2e6c9bb04452ae31d411179a459cceecebe468f06bb9289eea63303cadebec not found: ID does not exist" containerID="0d2e6c9bb04452ae31d411179a459cceecebe468f06bb9289eea63303cadebec" Jan 23 12:49:19 crc kubenswrapper[4865]: I0123 12:49:19.302524 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d2e6c9bb04452ae31d411179a459cceecebe468f06bb9289eea63303cadebec"} err="failed to get container status \"0d2e6c9bb04452ae31d411179a459cceecebe468f06bb9289eea63303cadebec\": rpc error: code = NotFound desc = could not find container \"0d2e6c9bb04452ae31d411179a459cceecebe468f06bb9289eea63303cadebec\": container with ID starting with 0d2e6c9bb04452ae31d411179a459cceecebe468f06bb9289eea63303cadebec not found: ID does not exist" Jan 23 12:49:20 crc kubenswrapper[4865]: I0123 12:49:20.133134 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" path="/var/lib/kubelet/pods/feb720f4-a0e1-4d1f-98ad-df69596b6b50/volumes" Jan 23 12:49:29 crc kubenswrapper[4865]: I0123 12:49:29.119009 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:49:29 crc kubenswrapper[4865]: E0123 12:49:29.121022 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:49:41 crc kubenswrapper[4865]: I0123 12:49:41.117630 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:49:41 crc kubenswrapper[4865]: E0123 12:49:41.118280 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:49:53 crc kubenswrapper[4865]: I0123 12:49:53.119091 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:49:53 crc kubenswrapper[4865]: I0123 12:49:53.549092 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"252afbbfcef2f8984487f9b8a509819ceebe38ec76baa0fba3638a26ddef44d8"} Jan 23 12:51:17 crc kubenswrapper[4865]: E0123 12:51:17.209634 4865 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.091s" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.461299 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fc4h8"] Jan 23 12:51:26 crc kubenswrapper[4865]: E0123 12:51:26.464298 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" containerName="registry-server" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.464389 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" containerName="registry-server" Jan 23 12:51:26 crc kubenswrapper[4865]: E0123 12:51:26.464408 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" containerName="extract-content" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.464415 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" containerName="extract-content" Jan 23 12:51:26 crc kubenswrapper[4865]: E0123 12:51:26.464650 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" containerName="extract-utilities" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.464710 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" containerName="extract-utilities" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.465015 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="feb720f4-a0e1-4d1f-98ad-df69596b6b50" containerName="registry-server" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.467981 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.626338 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf60cc3-36cd-449c-a995-85e3539d9014-catalog-content\") pod \"redhat-marketplace-fc4h8\" (UID: \"aaf60cc3-36cd-449c-a995-85e3539d9014\") " pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.626449 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf60cc3-36cd-449c-a995-85e3539d9014-utilities\") pod \"redhat-marketplace-fc4h8\" (UID: \"aaf60cc3-36cd-449c-a995-85e3539d9014\") " pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.626517 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sphkw\" (UniqueName: \"kubernetes.io/projected/aaf60cc3-36cd-449c-a995-85e3539d9014-kube-api-access-sphkw\") pod \"redhat-marketplace-fc4h8\" (UID: \"aaf60cc3-36cd-449c-a995-85e3539d9014\") " pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.629886 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fc4h8"] Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.728256 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf60cc3-36cd-449c-a995-85e3539d9014-catalog-content\") pod \"redhat-marketplace-fc4h8\" (UID: \"aaf60cc3-36cd-449c-a995-85e3539d9014\") " pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.728343 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf60cc3-36cd-449c-a995-85e3539d9014-utilities\") pod \"redhat-marketplace-fc4h8\" (UID: \"aaf60cc3-36cd-449c-a995-85e3539d9014\") " pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.728390 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sphkw\" (UniqueName: \"kubernetes.io/projected/aaf60cc3-36cd-449c-a995-85e3539d9014-kube-api-access-sphkw\") pod \"redhat-marketplace-fc4h8\" (UID: \"aaf60cc3-36cd-449c-a995-85e3539d9014\") " pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.730260 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf60cc3-36cd-449c-a995-85e3539d9014-utilities\") pod \"redhat-marketplace-fc4h8\" (UID: \"aaf60cc3-36cd-449c-a995-85e3539d9014\") " pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.730744 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf60cc3-36cd-449c-a995-85e3539d9014-catalog-content\") pod \"redhat-marketplace-fc4h8\" (UID: \"aaf60cc3-36cd-449c-a995-85e3539d9014\") " pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.768329 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-sphkw\" (UniqueName: \"kubernetes.io/projected/aaf60cc3-36cd-449c-a995-85e3539d9014-kube-api-access-sphkw\") pod \"redhat-marketplace-fc4h8\" (UID: \"aaf60cc3-36cd-449c-a995-85e3539d9014\") " pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:26 crc kubenswrapper[4865]: I0123 12:51:26.846968 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:27 crc kubenswrapper[4865]: I0123 12:51:27.668750 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fc4h8"] Jan 23 12:51:27 crc kubenswrapper[4865]: W0123 12:51:27.736407 4865 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaf60cc3_36cd_449c_a995_85e3539d9014.slice/crio-aa721fd19a9c4e7431e087cb9bab5e1f548c4caeb7df4957717591c86108c2eb WatchSource:0}: Error finding container aa721fd19a9c4e7431e087cb9bab5e1f548c4caeb7df4957717591c86108c2eb: Status 404 returned error can't find the container with id aa721fd19a9c4e7431e087cb9bab5e1f548c4caeb7df4957717591c86108c2eb Jan 23 12:51:28 crc kubenswrapper[4865]: I0123 12:51:28.377136 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fc4h8" event={"ID":"aaf60cc3-36cd-449c-a995-85e3539d9014","Type":"ContainerDied","Data":"c16f80c02e96da29da41143c9d8ccc759d8b00893cf5213d96ab471fbbcefa4d"} Jan 23 12:51:28 crc kubenswrapper[4865]: I0123 12:51:28.375630 4865 generic.go:334] "Generic (PLEG): container finished" podID="aaf60cc3-36cd-449c-a995-85e3539d9014" containerID="c16f80c02e96da29da41143c9d8ccc759d8b00893cf5213d96ab471fbbcefa4d" exitCode=0 Jan 23 12:51:28 crc kubenswrapper[4865]: I0123 12:51:28.378128 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fc4h8" event={"ID":"aaf60cc3-36cd-449c-a995-85e3539d9014","Type":"ContainerStarted","Data":"aa721fd19a9c4e7431e087cb9bab5e1f548c4caeb7df4957717591c86108c2eb"} Jan 23 12:51:28 crc kubenswrapper[4865]: I0123 12:51:28.392591 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 12:51:29 crc kubenswrapper[4865]: I0123 12:51:29.391144 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fc4h8" event={"ID":"aaf60cc3-36cd-449c-a995-85e3539d9014","Type":"ContainerStarted","Data":"09c7a23bfb0f0d20fcc3bd7ff3c3ddf94348509718eeb0e21318ecd80134e299"} Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.408484 4865 generic.go:334] "Generic (PLEG): container finished" podID="aaf60cc3-36cd-449c-a995-85e3539d9014" containerID="09c7a23bfb0f0d20fcc3bd7ff3c3ddf94348509718eeb0e21318ecd80134e299" exitCode=0 Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.408530 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fc4h8" event={"ID":"aaf60cc3-36cd-449c-a995-85e3539d9014","Type":"ContainerDied","Data":"09c7a23bfb0f0d20fcc3bd7ff3c3ddf94348509718eeb0e21318ecd80134e299"} Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.580267 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d9nfz"] Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.583682 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.604682 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d9nfz"] Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.757461 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f931330b-23a2-4304-b53f-0fd2a2fd53cb-utilities\") pod \"redhat-operators-d9nfz\" (UID: \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\") " pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.757545 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nwpc\" (UniqueName: \"kubernetes.io/projected/f931330b-23a2-4304-b53f-0fd2a2fd53cb-kube-api-access-4nwpc\") pod \"redhat-operators-d9nfz\" (UID: \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\") " pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.758099 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f931330b-23a2-4304-b53f-0fd2a2fd53cb-catalog-content\") pod \"redhat-operators-d9nfz\" (UID: \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\") " pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.861013 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f931330b-23a2-4304-b53f-0fd2a2fd53cb-utilities\") pod \"redhat-operators-d9nfz\" (UID: \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\") " pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.861127 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nwpc\" (UniqueName: \"kubernetes.io/projected/f931330b-23a2-4304-b53f-0fd2a2fd53cb-kube-api-access-4nwpc\") pod \"redhat-operators-d9nfz\" (UID: \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\") " pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.861229 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f931330b-23a2-4304-b53f-0fd2a2fd53cb-catalog-content\") pod \"redhat-operators-d9nfz\" (UID: \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\") " pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.861953 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f931330b-23a2-4304-b53f-0fd2a2fd53cb-utilities\") pod \"redhat-operators-d9nfz\" (UID: \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\") " pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.862162 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f931330b-23a2-4304-b53f-0fd2a2fd53cb-catalog-content\") pod \"redhat-operators-d9nfz\" (UID: \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\") " pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.906183 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4nwpc\" (UniqueName: \"kubernetes.io/projected/f931330b-23a2-4304-b53f-0fd2a2fd53cb-kube-api-access-4nwpc\") pod \"redhat-operators-d9nfz\" (UID: \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\") " pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:31 crc kubenswrapper[4865]: I0123 12:51:31.911154 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:32 crc kubenswrapper[4865]: I0123 12:51:32.418471 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fc4h8" event={"ID":"aaf60cc3-36cd-449c-a995-85e3539d9014","Type":"ContainerStarted","Data":"ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944"} Jan 23 12:51:32 crc kubenswrapper[4865]: I0123 12:51:32.461550 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fc4h8" podStartSLOduration=3.008598761 podStartE2EDuration="6.450209753s" podCreationTimestamp="2026-01-23 12:51:26 +0000 UTC" firstStartedPulling="2026-01-23 12:51:28.376772511 +0000 UTC m=+3532.545844737" lastFinishedPulling="2026-01-23 12:51:31.818383503 +0000 UTC m=+3535.987455729" observedRunningTime="2026-01-23 12:51:32.441978742 +0000 UTC m=+3536.611050968" watchObservedRunningTime="2026-01-23 12:51:32.450209753 +0000 UTC m=+3536.619281979" Jan 23 12:51:32 crc kubenswrapper[4865]: I0123 12:51:32.479367 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d9nfz"] Jan 23 12:51:33 crc kubenswrapper[4865]: I0123 12:51:33.429492 4865 generic.go:334] "Generic (PLEG): container finished" podID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerID="dcd4f32aca5dee11d675e60003f4ac4dc09137ed696e84bb734688dff9851823" exitCode=0 Jan 23 12:51:33 crc kubenswrapper[4865]: I0123 12:51:33.430615 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9nfz" event={"ID":"f931330b-23a2-4304-b53f-0fd2a2fd53cb","Type":"ContainerDied","Data":"dcd4f32aca5dee11d675e60003f4ac4dc09137ed696e84bb734688dff9851823"} Jan 23 12:51:33 crc kubenswrapper[4865]: I0123 12:51:33.430639 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9nfz" event={"ID":"f931330b-23a2-4304-b53f-0fd2a2fd53cb","Type":"ContainerStarted","Data":"c16330d67ff830af7e6de61e0c0f6e57e471281ffbac879ff72144cd36040fd5"} Jan 23 12:51:34 crc kubenswrapper[4865]: I0123 12:51:34.439960 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9nfz" event={"ID":"f931330b-23a2-4304-b53f-0fd2a2fd53cb","Type":"ContainerStarted","Data":"66544fa7a0eecdfd2925c16cd8bad1206aa17363e099fdfa61e75c57846a2f0d"} Jan 23 12:51:36 crc kubenswrapper[4865]: I0123 12:51:36.848091 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:36 crc kubenswrapper[4865]: I0123 12:51:36.848981 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:38 crc kubenswrapper[4865]: I0123 12:51:38.436442 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fc4h8" podUID="aaf60cc3-36cd-449c-a995-85e3539d9014" containerName="registry-server" probeResult="failure" output=< Jan 23 12:51:38 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 
12:51:38 crc kubenswrapper[4865]: > Jan 23 12:51:39 crc kubenswrapper[4865]: I0123 12:51:39.479219 4865 generic.go:334] "Generic (PLEG): container finished" podID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerID="66544fa7a0eecdfd2925c16cd8bad1206aa17363e099fdfa61e75c57846a2f0d" exitCode=0 Jan 23 12:51:39 crc kubenswrapper[4865]: I0123 12:51:39.479260 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9nfz" event={"ID":"f931330b-23a2-4304-b53f-0fd2a2fd53cb","Type":"ContainerDied","Data":"66544fa7a0eecdfd2925c16cd8bad1206aa17363e099fdfa61e75c57846a2f0d"} Jan 23 12:51:40 crc kubenswrapper[4865]: I0123 12:51:40.597662 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9nfz" event={"ID":"f931330b-23a2-4304-b53f-0fd2a2fd53cb","Type":"ContainerStarted","Data":"c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec"} Jan 23 12:51:41 crc kubenswrapper[4865]: I0123 12:51:41.912138 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:41 crc kubenswrapper[4865]: I0123 12:51:41.912197 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:51:42 crc kubenswrapper[4865]: I0123 12:51:42.962501 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d9nfz" podUID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerName="registry-server" probeResult="failure" output=< Jan 23 12:51:42 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:51:42 crc kubenswrapper[4865]: > Jan 23 12:51:47 crc kubenswrapper[4865]: I0123 12:51:47.069698 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:47 crc kubenswrapper[4865]: I0123 12:51:47.095004 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d9nfz" podStartSLOduration=9.592644517 podStartE2EDuration="16.09273975s" podCreationTimestamp="2026-01-23 12:51:31 +0000 UTC" firstStartedPulling="2026-01-23 12:51:33.431260868 +0000 UTC m=+3537.600333094" lastFinishedPulling="2026-01-23 12:51:39.931356101 +0000 UTC m=+3544.100428327" observedRunningTime="2026-01-23 12:51:40.635119861 +0000 UTC m=+3544.804192097" watchObservedRunningTime="2026-01-23 12:51:47.09273975 +0000 UTC m=+3551.261811966" Jan 23 12:51:47 crc kubenswrapper[4865]: I0123 12:51:47.123951 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:47 crc kubenswrapper[4865]: I0123 12:51:47.313555 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fc4h8"] Jan 23 12:51:48 crc kubenswrapper[4865]: I0123 12:51:48.680728 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fc4h8" podUID="aaf60cc3-36cd-449c-a995-85e3539d9014" containerName="registry-server" containerID="cri-o://ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944" gracePeriod=2 Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.661103 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.688783 4865 generic.go:334] "Generic (PLEG): container finished" podID="aaf60cc3-36cd-449c-a995-85e3539d9014" containerID="ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944" exitCode=0 Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.688822 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fc4h8" event={"ID":"aaf60cc3-36cd-449c-a995-85e3539d9014","Type":"ContainerDied","Data":"ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944"} Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.688847 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fc4h8" event={"ID":"aaf60cc3-36cd-449c-a995-85e3539d9014","Type":"ContainerDied","Data":"aa721fd19a9c4e7431e087cb9bab5e1f548c4caeb7df4957717591c86108c2eb"} Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.688874 4865 scope.go:117] "RemoveContainer" containerID="ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.688995 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fc4h8" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.732771 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf60cc3-36cd-449c-a995-85e3539d9014-utilities\") pod \"aaf60cc3-36cd-449c-a995-85e3539d9014\" (UID: \"aaf60cc3-36cd-449c-a995-85e3539d9014\") " Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.733322 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sphkw\" (UniqueName: \"kubernetes.io/projected/aaf60cc3-36cd-449c-a995-85e3539d9014-kube-api-access-sphkw\") pod \"aaf60cc3-36cd-449c-a995-85e3539d9014\" (UID: \"aaf60cc3-36cd-449c-a995-85e3539d9014\") " Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.733638 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf60cc3-36cd-449c-a995-85e3539d9014-catalog-content\") pod \"aaf60cc3-36cd-449c-a995-85e3539d9014\" (UID: \"aaf60cc3-36cd-449c-a995-85e3539d9014\") " Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.734146 4865 scope.go:117] "RemoveContainer" containerID="09c7a23bfb0f0d20fcc3bd7ff3c3ddf94348509718eeb0e21318ecd80134e299" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.736525 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf60cc3-36cd-449c-a995-85e3539d9014-utilities" (OuterVolumeSpecName: "utilities") pod "aaf60cc3-36cd-449c-a995-85e3539d9014" (UID: "aaf60cc3-36cd-449c-a995-85e3539d9014"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.797524 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf60cc3-36cd-449c-a995-85e3539d9014-kube-api-access-sphkw" (OuterVolumeSpecName: "kube-api-access-sphkw") pod "aaf60cc3-36cd-449c-a995-85e3539d9014" (UID: "aaf60cc3-36cd-449c-a995-85e3539d9014"). InnerVolumeSpecName "kube-api-access-sphkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.825120 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf60cc3-36cd-449c-a995-85e3539d9014-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aaf60cc3-36cd-449c-a995-85e3539d9014" (UID: "aaf60cc3-36cd-449c-a995-85e3539d9014"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.842153 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sphkw\" (UniqueName: \"kubernetes.io/projected/aaf60cc3-36cd-449c-a995-85e3539d9014-kube-api-access-sphkw\") on node \"crc\" DevicePath \"\"" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.842200 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf60cc3-36cd-449c-a995-85e3539d9014-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.842211 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf60cc3-36cd-449c-a995-85e3539d9014-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.880112 4865 scope.go:117] "RemoveContainer" containerID="c16f80c02e96da29da41143c9d8ccc759d8b00893cf5213d96ab471fbbcefa4d" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.909837 4865 scope.go:117] "RemoveContainer" containerID="ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944" Jan 23 12:51:49 crc kubenswrapper[4865]: E0123 12:51:49.916128 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944\": container with ID starting with ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944 not found: ID does not exist" containerID="ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.916542 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944"} err="failed to get container status \"ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944\": rpc error: code = NotFound desc = could not find container \"ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944\": container with ID starting with ec489c74e5cd256ecd0cd4f86a4aece6d39e7dca891dae4237957b739f653944 not found: ID does not exist" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.916579 4865 scope.go:117] "RemoveContainer" containerID="09c7a23bfb0f0d20fcc3bd7ff3c3ddf94348509718eeb0e21318ecd80134e299" Jan 23 12:51:49 crc kubenswrapper[4865]: E0123 12:51:49.917161 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c7a23bfb0f0d20fcc3bd7ff3c3ddf94348509718eeb0e21318ecd80134e299\": container with ID starting with 09c7a23bfb0f0d20fcc3bd7ff3c3ddf94348509718eeb0e21318ecd80134e299 not found: ID does not exist" containerID="09c7a23bfb0f0d20fcc3bd7ff3c3ddf94348509718eeb0e21318ecd80134e299" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.917200 4865 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"09c7a23bfb0f0d20fcc3bd7ff3c3ddf94348509718eeb0e21318ecd80134e299"} err="failed to get container status \"09c7a23bfb0f0d20fcc3bd7ff3c3ddf94348509718eeb0e21318ecd80134e299\": rpc error: code = NotFound desc = could not find container \"09c7a23bfb0f0d20fcc3bd7ff3c3ddf94348509718eeb0e21318ecd80134e299\": container with ID starting with 09c7a23bfb0f0d20fcc3bd7ff3c3ddf94348509718eeb0e21318ecd80134e299 not found: ID does not exist" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.917228 4865 scope.go:117] "RemoveContainer" containerID="c16f80c02e96da29da41143c9d8ccc759d8b00893cf5213d96ab471fbbcefa4d" Jan 23 12:51:49 crc kubenswrapper[4865]: E0123 12:51:49.918956 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c16f80c02e96da29da41143c9d8ccc759d8b00893cf5213d96ab471fbbcefa4d\": container with ID starting with c16f80c02e96da29da41143c9d8ccc759d8b00893cf5213d96ab471fbbcefa4d not found: ID does not exist" containerID="c16f80c02e96da29da41143c9d8ccc759d8b00893cf5213d96ab471fbbcefa4d" Jan 23 12:51:49 crc kubenswrapper[4865]: I0123 12:51:49.918989 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c16f80c02e96da29da41143c9d8ccc759d8b00893cf5213d96ab471fbbcefa4d"} err="failed to get container status \"c16f80c02e96da29da41143c9d8ccc759d8b00893cf5213d96ab471fbbcefa4d\": rpc error: code = NotFound desc = could not find container \"c16f80c02e96da29da41143c9d8ccc759d8b00893cf5213d96ab471fbbcefa4d\": container with ID starting with c16f80c02e96da29da41143c9d8ccc759d8b00893cf5213d96ab471fbbcefa4d not found: ID does not exist" Jan 23 12:51:50 crc kubenswrapper[4865]: I0123 12:51:50.022400 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fc4h8"] Jan 23 12:51:50 crc kubenswrapper[4865]: I0123 12:51:50.032474 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fc4h8"] Jan 23 12:51:50 crc kubenswrapper[4865]: I0123 12:51:50.128770 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaf60cc3-36cd-449c-a995-85e3539d9014" path="/var/lib/kubelet/pods/aaf60cc3-36cd-449c-a995-85e3539d9014/volumes" Jan 23 12:51:52 crc kubenswrapper[4865]: I0123 12:51:52.958964 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d9nfz" podUID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerName="registry-server" probeResult="failure" output=< Jan 23 12:51:52 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:51:52 crc kubenswrapper[4865]: > Jan 23 12:52:02 crc kubenswrapper[4865]: I0123 12:52:02.967847 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d9nfz" podUID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerName="registry-server" probeResult="failure" output=< Jan 23 12:52:02 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:52:02 crc kubenswrapper[4865]: > Jan 23 12:52:11 crc kubenswrapper[4865]: I0123 12:52:11.973119 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:52:12 crc kubenswrapper[4865]: I0123 12:52:12.022423 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:52:12 crc kubenswrapper[4865]: I0123 
12:52:12.688170 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d9nfz"] Jan 23 12:52:13 crc kubenswrapper[4865]: I0123 12:52:13.908165 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d9nfz" podUID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerName="registry-server" containerID="cri-o://c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec" gracePeriod=2 Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.696676 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.755162 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nwpc\" (UniqueName: \"kubernetes.io/projected/f931330b-23a2-4304-b53f-0fd2a2fd53cb-kube-api-access-4nwpc\") pod \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\" (UID: \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\") " Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.755222 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f931330b-23a2-4304-b53f-0fd2a2fd53cb-utilities\") pod \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\" (UID: \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\") " Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.755244 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f931330b-23a2-4304-b53f-0fd2a2fd53cb-catalog-content\") pod \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\" (UID: \"f931330b-23a2-4304-b53f-0fd2a2fd53cb\") " Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.759466 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f931330b-23a2-4304-b53f-0fd2a2fd53cb-utilities" (OuterVolumeSpecName: "utilities") pod "f931330b-23a2-4304-b53f-0fd2a2fd53cb" (UID: "f931330b-23a2-4304-b53f-0fd2a2fd53cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.768326 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f931330b-23a2-4304-b53f-0fd2a2fd53cb-kube-api-access-4nwpc" (OuterVolumeSpecName: "kube-api-access-4nwpc") pod "f931330b-23a2-4304-b53f-0fd2a2fd53cb" (UID: "f931330b-23a2-4304-b53f-0fd2a2fd53cb"). InnerVolumeSpecName "kube-api-access-4nwpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.858079 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nwpc\" (UniqueName: \"kubernetes.io/projected/f931330b-23a2-4304-b53f-0fd2a2fd53cb-kube-api-access-4nwpc\") on node \"crc\" DevicePath \"\"" Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.858116 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f931330b-23a2-4304-b53f-0fd2a2fd53cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.876982 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f931330b-23a2-4304-b53f-0fd2a2fd53cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f931330b-23a2-4304-b53f-0fd2a2fd53cb" (UID: "f931330b-23a2-4304-b53f-0fd2a2fd53cb"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.917769 4865 generic.go:334] "Generic (PLEG): container finished" podID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerID="c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec" exitCode=0 Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.917866 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9nfz" Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.917888 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9nfz" event={"ID":"f931330b-23a2-4304-b53f-0fd2a2fd53cb","Type":"ContainerDied","Data":"c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec"} Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.918618 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9nfz" event={"ID":"f931330b-23a2-4304-b53f-0fd2a2fd53cb","Type":"ContainerDied","Data":"c16330d67ff830af7e6de61e0c0f6e57e471281ffbac879ff72144cd36040fd5"} Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.918916 4865 scope.go:117] "RemoveContainer" containerID="c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec" Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.958368 4865 scope.go:117] "RemoveContainer" containerID="66544fa7a0eecdfd2925c16cd8bad1206aa17363e099fdfa61e75c57846a2f0d" Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.959396 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f931330b-23a2-4304-b53f-0fd2a2fd53cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.965179 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d9nfz"] Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.971468 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d9nfz"] Jan 23 12:52:14 crc kubenswrapper[4865]: I0123 12:52:14.981300 4865 scope.go:117] "RemoveContainer" containerID="dcd4f32aca5dee11d675e60003f4ac4dc09137ed696e84bb734688dff9851823" Jan 23 12:52:15 crc kubenswrapper[4865]: I0123 12:52:15.019070 4865 scope.go:117] "RemoveContainer" containerID="c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec" Jan 23 12:52:15 crc kubenswrapper[4865]: E0123 12:52:15.020646 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec\": container with ID starting with c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec not found: ID does not exist" containerID="c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec" Jan 23 12:52:15 crc kubenswrapper[4865]: I0123 12:52:15.020767 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec"} err="failed to get container status \"c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec\": rpc error: code = NotFound desc = could not find container \"c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec\": container with ID starting with c88ac88886a61cfb228f4caa5a6948b12393e141b4ef3f84f4e0ffbb8fd9f5ec not found: ID 
does not exist" Jan 23 12:52:15 crc kubenswrapper[4865]: I0123 12:52:15.020843 4865 scope.go:117] "RemoveContainer" containerID="66544fa7a0eecdfd2925c16cd8bad1206aa17363e099fdfa61e75c57846a2f0d" Jan 23 12:52:15 crc kubenswrapper[4865]: E0123 12:52:15.021147 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66544fa7a0eecdfd2925c16cd8bad1206aa17363e099fdfa61e75c57846a2f0d\": container with ID starting with 66544fa7a0eecdfd2925c16cd8bad1206aa17363e099fdfa61e75c57846a2f0d not found: ID does not exist" containerID="66544fa7a0eecdfd2925c16cd8bad1206aa17363e099fdfa61e75c57846a2f0d" Jan 23 12:52:15 crc kubenswrapper[4865]: I0123 12:52:15.021235 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66544fa7a0eecdfd2925c16cd8bad1206aa17363e099fdfa61e75c57846a2f0d"} err="failed to get container status \"66544fa7a0eecdfd2925c16cd8bad1206aa17363e099fdfa61e75c57846a2f0d\": rpc error: code = NotFound desc = could not find container \"66544fa7a0eecdfd2925c16cd8bad1206aa17363e099fdfa61e75c57846a2f0d\": container with ID starting with 66544fa7a0eecdfd2925c16cd8bad1206aa17363e099fdfa61e75c57846a2f0d not found: ID does not exist" Jan 23 12:52:15 crc kubenswrapper[4865]: I0123 12:52:15.021298 4865 scope.go:117] "RemoveContainer" containerID="dcd4f32aca5dee11d675e60003f4ac4dc09137ed696e84bb734688dff9851823" Jan 23 12:52:15 crc kubenswrapper[4865]: E0123 12:52:15.021615 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcd4f32aca5dee11d675e60003f4ac4dc09137ed696e84bb734688dff9851823\": container with ID starting with dcd4f32aca5dee11d675e60003f4ac4dc09137ed696e84bb734688dff9851823 not found: ID does not exist" containerID="dcd4f32aca5dee11d675e60003f4ac4dc09137ed696e84bb734688dff9851823" Jan 23 12:52:15 crc kubenswrapper[4865]: I0123 12:52:15.021687 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcd4f32aca5dee11d675e60003f4ac4dc09137ed696e84bb734688dff9851823"} err="failed to get container status \"dcd4f32aca5dee11d675e60003f4ac4dc09137ed696e84bb734688dff9851823\": rpc error: code = NotFound desc = could not find container \"dcd4f32aca5dee11d675e60003f4ac4dc09137ed696e84bb734688dff9851823\": container with ID starting with dcd4f32aca5dee11d675e60003f4ac4dc09137ed696e84bb734688dff9851823 not found: ID does not exist" Jan 23 12:52:16 crc kubenswrapper[4865]: I0123 12:52:16.135496 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" path="/var/lib/kubelet/pods/f931330b-23a2-4304-b53f-0fd2a2fd53cb/volumes" Jan 23 12:52:18 crc kubenswrapper[4865]: I0123 12:52:18.776475 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:52:18 crc kubenswrapper[4865]: I0123 12:52:18.777243 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:52:48 crc kubenswrapper[4865]: I0123 12:52:48.776535 4865 
patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:52:48 crc kubenswrapper[4865]: I0123 12:52:48.777229 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:53:18 crc kubenswrapper[4865]: I0123 12:53:18.776170 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:53:18 crc kubenswrapper[4865]: I0123 12:53:18.776645 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:53:18 crc kubenswrapper[4865]: I0123 12:53:18.777080 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 12:53:18 crc kubenswrapper[4865]: I0123 12:53:18.778236 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"252afbbfcef2f8984487f9b8a509819ceebe38ec76baa0fba3638a26ddef44d8"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 12:53:18 crc kubenswrapper[4865]: I0123 12:53:18.778489 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://252afbbfcef2f8984487f9b8a509819ceebe38ec76baa0fba3638a26ddef44d8" gracePeriod=600 Jan 23 12:53:19 crc kubenswrapper[4865]: I0123 12:53:19.473327 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="252afbbfcef2f8984487f9b8a509819ceebe38ec76baa0fba3638a26ddef44d8" exitCode=0 Jan 23 12:53:19 crc kubenswrapper[4865]: I0123 12:53:19.473531 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"252afbbfcef2f8984487f9b8a509819ceebe38ec76baa0fba3638a26ddef44d8"} Jan 23 12:53:19 crc kubenswrapper[4865]: I0123 12:53:19.474343 4865 scope.go:117] "RemoveContainer" containerID="d0681afbfb396bc9551767dcdd480a6c8a67f004fbb468143e4180dd5a1d24f8" Jan 23 12:53:20 crc kubenswrapper[4865]: I0123 12:53:20.487049 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" 
event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606"} Jan 23 12:54:39 crc kubenswrapper[4865]: I0123 12:54:39.898174 4865 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.225846239s: [/var/lib/containers/storage/overlay/cc447ceb6efc13e01303f06952a1a5de18fedb2261df2d1f61e67b6e388edf91/diff /var/log/pods/openstack_openstack-galera-0_78884295-a3de-4e00-bcc4-6a1627b50717/galera/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 12:54:39 crc kubenswrapper[4865]: I0123 12:54:39.901103 4865 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.229924459s: [/var/lib/containers/storage/overlay/060b61254131b3b6acc65cb327f02a2f6d081daacc8598bcfab0bc56a48d6eba/diff /var/log/pods/openstack_openstack-cell1-galera-0_5cf30925-0355-42db-9895-f23a97fca08e/galera/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 12:54:39 crc kubenswrapper[4865]: I0123 12:54:39.978993 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:54:40 crc kubenswrapper[4865]: I0123 12:54:40.002315 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:54:40 crc kubenswrapper[4865]: I0123 12:54:40.002389 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:54:40 crc kubenswrapper[4865]: I0123 12:54:39.979226 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:55:48 crc kubenswrapper[4865]: I0123 12:55:48.776867 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:55:48 crc kubenswrapper[4865]: I0123 12:55:48.778078 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:56:20 crc kubenswrapper[4865]: I0123 12:56:20.788828 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" 
podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:56:20 crc kubenswrapper[4865]: I0123 12:56:20.795956 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:56:20 crc kubenswrapper[4865]: I0123 12:56:20.796011 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:56:20 crc kubenswrapper[4865]: I0123 12:56:20.813303 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:56:20 crc kubenswrapper[4865]: I0123 12:56:20.825908 4865 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.137032746s: [/var/lib/containers/storage/overlay/2c8f981e5c863c3f9fcb5b14794bd33882856a7670df01826776f6eda6945d26/diff /var/log/pods/openstack_barbican-keystone-listener-969599b78-gpdqv_2396993e-12f6-41c3-9f09-b501bf6fb29b/barbican-keystone-listener/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 12:56:20 crc kubenswrapper[4865]: I0123 12:56:20.812686 4865 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.683983095s: [/var/lib/containers/storage/overlay/17a5dbb812cce3ea6f19dd6f16fe3d40c5bcea400f8ebd08e394cff7389262af/diff /var/log/pods/openstack_nova-api-0_ad9795d9-23da-4c83-af4f-cd9ee93afd93/nova-api-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 12:56:20 crc kubenswrapper[4865]: I0123 12:56:20.800336 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:56:20 crc kubenswrapper[4865]: I0123 12:56:20.854328 4865 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.027567401s: [/var/lib/containers/storage/overlay/e661e002ea2f85e94e61351ab4659e232fcf1cafee484f362dc4b8208bde0095/diff /var/log/pods/openstack_barbican-worker-98cdcc84f-cr2jf_cb23e044-f1b3-4114-94af-7aa272f670a0/barbican-worker/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 12:56:20 crc kubenswrapper[4865]: I0123 12:56:20.834958 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:56:48 crc kubenswrapper[4865]: I0123 12:56:48.777105 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 12:56:48 crc kubenswrapper[4865]: I0123 12:56:48.777679 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 12:56:50 crc kubenswrapper[4865]: I0123 12:56:50.360461 4865 trace.go:236] Trace[326760973]: "Calculate volume metrics of iptables-alerter-script for pod openshift-network-operator/iptables-alerter-4ln5h" (23-Jan-2026 12:56:49.217) (total time: 1141ms): Jan 23 12:56:50 crc kubenswrapper[4865]: Trace[326760973]: [1.141385659s] [1.141385659s] END Jan 23 12:56:50 crc kubenswrapper[4865]: I0123 12:56:50.362933 4865 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.596038876s: [/var/lib/containers/storage/overlay/0ef51f5973d8fe9d7ded841ad3d43e45dcf36f8ac592c35c1d96204544149257/diff /var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-9fl7w_e92ddc14-bdb6-4407-b8a3-047079030166/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 12:56:50 crc kubenswrapper[4865]: I0123 12:56:50.388874 4865 trace.go:236] Trace[1849450866]: "Calculate volume metrics of test-operator-ephemeral-temporary for pod openstack/tempest-tests-tempest-s00-multi-thread-testing" (23-Jan-2026 12:56:48.657) (total time: 1731ms): Jan 23 12:56:50 crc kubenswrapper[4865]: Trace[1849450866]: [1.731771617s] [1.731771617s] END Jan 23 12:56:50 crc kubenswrapper[4865]: I0123 12:56:50.391753 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 12:56:50 crc kubenswrapper[4865]: I0123 12:56:50.399407 4865 trace.go:236] Trace[431678312]: "Calculate volume metrics of service-ca for pod openshift-cluster-version/cluster-version-operator-5c965bbfc6-x9z86" (23-Jan-2026 12:56:48.492) (total time: 1906ms): Jan 23 12:56:50 crc kubenswrapper[4865]: Trace[431678312]: [1.906644072s] [1.906644072s] END Jan 23 12:56:50 crc kubenswrapper[4865]: I0123 12:56:50.435961 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 12:56:50 crc kubenswrapper[4865]: I0123 12:56:50.438529 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" gracePeriod=600 Jan 23 12:56:50 crc kubenswrapper[4865]: I0123 12:56:50.567479 4865 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.187333145s: [/var/lib/containers/storage/overlay/59d5580f54327ce4d3cb705b0da2ee45765a7a9570be239ae94fc3e1a68bf8ee/diff /var/log/pods/openstack_horizon-66f7b94cdb-f7pw2_98cc6a2c-601d-49ae-8d9c-da49869b3639/horizon/2.log]; will not log again for this container unless duration exceeds 2s Jan 23 12:56:50 crc kubenswrapper[4865]: E0123 12:56:50.600039 4865 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:56:51 crc kubenswrapper[4865]: I0123 12:56:51.457559 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" exitCode=0 Jan 23 12:56:51 crc kubenswrapper[4865]: I0123 12:56:51.457637 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606"} Jan 23 12:56:51 crc kubenswrapper[4865]: I0123 12:56:51.458328 4865 scope.go:117] "RemoveContainer" containerID="252afbbfcef2f8984487f9b8a509819ceebe38ec76baa0fba3638a26ddef44d8" Jan 23 12:56:51 crc kubenswrapper[4865]: I0123 12:56:51.458588 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:56:51 crc kubenswrapper[4865]: E0123 12:56:51.459071 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:57:06 crc kubenswrapper[4865]: I0123 12:57:06.124985 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:57:06 crc kubenswrapper[4865]: E0123 12:57:06.125815 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:57:21 crc kubenswrapper[4865]: I0123 12:57:21.117915 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:57:21 crc kubenswrapper[4865]: E0123 12:57:21.118627 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:57:29 crc kubenswrapper[4865]: I0123 12:57:29.460635 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:29 crc kubenswrapper[4865]: I0123 12:57:29.497829 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 23 12:57:29 crc kubenswrapper[4865]: I0123 12:57:29.507221 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:29 crc kubenswrapper[4865]: I0123 12:57:29.527689 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:29 crc kubenswrapper[4865]: I0123 12:57:29.528045 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:29 crc kubenswrapper[4865]: I0123 12:57:29.596109 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:29 crc kubenswrapper[4865]: I0123 12:57:29.596183 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:29 crc kubenswrapper[4865]: I0123 12:57:29.837998 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:57:30 crc kubenswrapper[4865]: I0123 12:57:30.769334 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:57:32 crc kubenswrapper[4865]: I0123 12:57:31.698818 4865 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-qtxv5 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.20:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:32 crc kubenswrapper[4865]: I0123 12:57:31.699468 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" 
containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.20:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:34 crc kubenswrapper[4865]: I0123 12:57:34.822636 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:35 crc kubenswrapper[4865]: I0123 12:57:35.172543 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:57:35 crc kubenswrapper[4865]: E0123 12:57:35.174051 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.313565 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.504856 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.74:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.504939 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.74:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.504985 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.522978 4865 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.524011 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get 
\"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.634110 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.634119 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.634188 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.634219 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.675849 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.819374 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.823941 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.867210 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.950857 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.950861 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:37 crc kubenswrapper[4865]: I0123 12:57:37.992787 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:38 crc kubenswrapper[4865]: I0123 12:57:38.057811 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:38 crc kubenswrapper[4865]: I0123 12:57:38.057887 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:38 crc kubenswrapper[4865]: I0123 12:57:38.057962 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:38 crc kubenswrapper[4865]: I0123 12:57:38.057996 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:38 crc kubenswrapper[4865]: I0123 12:57:38.067392 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:57:38 crc kubenswrapper[4865]: I0123 12:57:38.267230 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:38 crc kubenswrapper[4865]: I0123 12:57:38.337813 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:38 crc kubenswrapper[4865]: I0123 12:57:38.806863 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:38 crc kubenswrapper[4865]: I0123 12:57:38.806867 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:39 crc kubenswrapper[4865]: I0123 12:57:39.073844 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:57:39 crc kubenswrapper[4865]: I0123 12:57:39.767941 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:39 crc kubenswrapper[4865]: I0123 12:57:39.768033 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:39 crc kubenswrapper[4865]: I0123 12:57:39.768107 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:39 crc kubenswrapper[4865]: I0123 12:57:39.768133 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:40 crc kubenswrapper[4865]: I0123 12:57:40.244880 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:40 crc kubenswrapper[4865]: I0123 12:57:40.245100 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" 
podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:40 crc kubenswrapper[4865]: I0123 12:57:40.551224 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:40 crc kubenswrapper[4865]: I0123 12:57:40.551305 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:41 crc kubenswrapper[4865]: I0123 12:57:40.551720 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:41 crc kubenswrapper[4865]: I0123 12:57:40.551756 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:41 crc kubenswrapper[4865]: I0123 12:57:40.633013 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:57:41 crc kubenswrapper[4865]: I0123 12:57:41.489272 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:41 crc kubenswrapper[4865]: I0123 12:57:41.489382 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:41 crc kubenswrapper[4865]: I0123 12:57:41.630739 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:41 crc kubenswrapper[4865]: 
I0123 12:57:41.630779 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:41 crc kubenswrapper[4865]: I0123 12:57:41.657118 4865 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-qtxv5 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.20:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:41 crc kubenswrapper[4865]: I0123 12:57:41.657289 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.20:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:41 crc kubenswrapper[4865]: I0123 12:57:41.817208 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-8547q" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 23 12:57:41 crc kubenswrapper[4865]: I0123 12:57:41.817310 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:57:41 crc kubenswrapper[4865]: I0123 12:57:41.817400 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:57:41 crc kubenswrapper[4865]: I0123 12:57:41.820322 4865 trace.go:236] Trace[132761165]: "Calculate volume metrics of etc-kube for pod openshift-machine-config-operator/kube-rbac-proxy-crio-crc" (23-Jan-2026 12:57:40.646) (total time: 1172ms): Jan 23 12:57:41 crc kubenswrapper[4865]: Trace[132761165]: [1.172565448s] [1.172565448s] END Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.425218 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.74:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.720254 4865 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-hrzcb container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.720328 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" podUID="c8896518-4b5b-4712-9994-0bb445a3504f" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.738816 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.738869 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.739259 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.739316 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.769400 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.769761 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.774893 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.775055 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.775162 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.775452 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.783761 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" 
podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.899450 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.958722 4865 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:42 crc kubenswrapper[4865]: I0123 12:57:42.958792 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.091807 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.091883 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.183742 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.45:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.527906 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.527927 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.591953 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness 
probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.592027 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.592958 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.633869 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.633953 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.633998 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.736821 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.736823 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.770869 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 23 12:57:43 crc kubenswrapper[4865]: I0123 12:57:43.771653 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 23 12:57:44 crc kubenswrapper[4865]: I0123 12:57:44.280742 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"eeb461b6cb630a97f0fbc5e12f059a8993d241deb81f696209187f4282c21944"} 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 23 12:57:44 crc kubenswrapper[4865]: I0123 12:57:44.281075 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" containerID="cri-o://eeb461b6cb630a97f0fbc5e12f059a8993d241deb81f696209187f4282c21944" gracePeriod=30 Jan 23 12:57:44 crc kubenswrapper[4865]: I0123 12:57:44.421041 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:44 crc kubenswrapper[4865]: I0123 12:57:44.421466 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:44 crc kubenswrapper[4865]: I0123 12:57:44.503878 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:44 crc kubenswrapper[4865]: I0123 12:57:44.503881 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:44 crc kubenswrapper[4865]: I0123 12:57:44.504029 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:44 crc kubenswrapper[4865]: I0123 12:57:44.503954 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:44 crc kubenswrapper[4865]: I0123 12:57:44.597822 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:44 crc kubenswrapper[4865]: I0123 12:57:44.597905 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.060824 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.060874 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.061182 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.060883 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.061231 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.060903 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.060918 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.061312 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.061269 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" 
probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.071762 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.071798 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.071860 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.071903 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.071822 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.071941 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g5xkl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.071959 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.072002 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g5xkl container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: 
I0123 12:57:45.072043 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.072069 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.072071 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.072117 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.072138 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.072136 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.072153 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.072156 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.072133 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.072177 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.248939 4865 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-4g249 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.249026 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.248945 4865 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-4g249 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.249156 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.477722 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.512328 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.512381 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.512443 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.512513 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.550429 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.550497 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.776809 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.776896 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.845042 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.885810 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.895745 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:45 crc kubenswrapper[4865]: I0123 12:57:45.895817 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:46 crc kubenswrapper[4865]: I0123 12:57:46.219428 4865 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f83b3f2b-567e-4afe-9797-db1aa2bdadaa" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:46 crc kubenswrapper[4865]: I0123 12:57:46.260317 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f83b3f2b-567e-4afe-9797-db1aa2bdadaa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:46 crc kubenswrapper[4865]: I0123 12:57:46.674049 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.33:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:46 crc kubenswrapper[4865]: I0123 12:57:46.674255 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" Jan 23 12:57:46 crc kubenswrapper[4865]: I0123 12:57:46.676194 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="hostpath-provisioner" containerStatusID={"Type":"cri-o","ID":"a61639e591677ff6558a3b162b8b0f56384d87f66c1768087f94fff3e4308e0f"} pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" containerMessage="Container hostpath-provisioner failed liveness probe, will be restarted" Jan 23 12:57:46 crc kubenswrapper[4865]: I0123 12:57:46.677139 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" containerName="hostpath-provisioner" containerID="cri-o://a61639e591677ff6558a3b162b8b0f56384d87f66c1768087f94fff3e4308e0f" gracePeriod=30 Jan 23 12:57:46 crc kubenswrapper[4865]: I0123 12:57:46.769117 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-8547q" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 23 12:57:46 crc kubenswrapper[4865]: I0123 12:57:46.776901 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:46 crc kubenswrapper[4865]: I0123 12:57:46.777858 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.381941 4865 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-sgsqx container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.382024 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" 
podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.422776 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.422833 4865 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-sgsqx container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.422872 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.423071 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.53:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.504866 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.505273 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.53:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.587770 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.587869 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.56:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.770464 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.770476 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.770539 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ovn-northd-0" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.770652 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.770587 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" containerName="ovnkube-controller" probeResult="failure" output="command timed out" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.774183 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.774205 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.774340 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.774588 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.779461 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ovn-northd" containerStatusID={"Type":"cri-o","ID":"1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1"} pod="openstack/ovn-northd-0" containerMessage="Container ovn-northd failed liveness probe, will be restarted" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.781775 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerName="ovn-northd" containerID="cri-o://1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1" gracePeriod=30 Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.793776 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.74:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.793882 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.794322 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.794375 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.794462 4865 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.794546 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.794561 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.56:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.876751 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.876757 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.876812 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.959800 4865 prober.go:107] "Probe failed" probeType="Liveness" 
pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.74:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:47 crc kubenswrapper[4865]: I0123 12:57:47.959801 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.043807 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.043811 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.128089 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.214140 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.214201 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.377804 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.378041 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.460049 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.545910 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.545992 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.546026 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.546042 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.546066 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.546079 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.546104 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.546469 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.546571 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.547078 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"f7db30f04928c52c6d1185acbe8a775b6211677b2574d48e1b3cd288e7764e52"} pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" containerMessage="Container controller-manager failed liveness probe, will be restarted" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.547124 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" containerID="cri-o://f7db30f04928c52c6d1185acbe8a775b6211677b2574d48e1b3cd288e7764e52" gracePeriod=30 Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.547088 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.628839 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.628886 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.771445 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerName="ovn-northd" probeResult="failure" output="command timed out" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.774686 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.775272 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-hzwqc" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.775644 4865 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.775936 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.779470 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.781663 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-hzwqc" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.793838 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.793876 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.793979 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.793999 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.794364 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.875803 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.875856 4865 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.875892 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.875814 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.916980 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.916975 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.917107 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.958838 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.74:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.958849 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.959137 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:57:48 crc kubenswrapper[4865]: I0123 12:57:48.959192 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" 
podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:57:49 crc kubenswrapper[4865]: I0123 12:57:49.120914 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:57:49 crc kubenswrapper[4865]: E0123 12:57:49.122127 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:57:49 crc kubenswrapper[4865]: I0123 12:57:49.438141 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:49 crc kubenswrapper[4865]: I0123 12:57:49.438138 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:49 crc kubenswrapper[4865]: I0123 12:57:49.726893 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:49 crc kubenswrapper[4865]: I0123 12:57:49.727037 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:49 crc kubenswrapper[4865]: I0123 12:57:49.768863 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:49 crc kubenswrapper[4865]: I0123 12:57:49.768935 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:49 crc kubenswrapper[4865]: I0123 12:57:49.835902 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:49 crc kubenswrapper[4865]: I0123 12:57:49.959900 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:50 crc kubenswrapper[4865]: I0123 12:57:50.202784 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:50 crc kubenswrapper[4865]: I0123 12:57:50.489663 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.39:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:50 crc kubenswrapper[4865]: I0123 12:57:50.490026 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:50 crc kubenswrapper[4865]: I0123 12:57:50.490079 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:57:50 crc kubenswrapper[4865]: I0123 12:57:50.490839 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"02538d47a4f7198d06ac45cdff31ecba4f646e402e14499af85a0e57e26dbec9"} pod="openshift-console/console-5d7d54b946-29gbz" containerMessage="Container console failed liveness probe, will be restarted" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.374996 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-858654f9db-mbdcq" podUID="a332d40d-1d78-4d9d-b768-b988654c732a" containerName="cert-manager-controller" probeResult="failure" output="Get \"http://10.217.0.73:9403/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:51 crc kubenswrapper[4865]: E0123 12:57:51.441933 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 12:57:51 crc kubenswrapper[4865]: E0123 12:57:51.452013 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1" 
cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 12:57:51 crc kubenswrapper[4865]: E0123 12:57:51.453707 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 12:57:51 crc kubenswrapper[4865]: E0123 12:57:51.453768 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerName="ovn-northd" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.488623 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.488691 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.551071 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.551154 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.551595 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.629139 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.629164 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.656928 4865 patch_prober.go:28] interesting 
pod/nmstate-webhook-8474b5b9d8-qtxv5 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.20:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.656997 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.20:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.657076 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.779896 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" containerName="nbdb" probeResult="failure" output="command timed out" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.779895 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.780045 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.780776 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" containerName="sbdb" probeResult="failure" output="command timed out" Jan 23 12:57:51 crc kubenswrapper[4865]: I0123 12:57:51.782859 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.181736 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.388755 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.388820 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.657774 4865 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-qtxv5 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.20:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.658554 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.20:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.678741 4865 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-hrzcb container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.678963 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" podUID="c8896518-4b5b-4712-9994-0bb445a3504f" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.737646 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.737709 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.768520 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.768665 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.769641 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-8547q" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.769775 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.770510 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:57:52 crc 
kubenswrapper[4865]: I0123 12:57:52.737714 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.777737 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.778352 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.794330 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 23 12:57:52 crc kubenswrapper[4865]: I0123 12:57:52.794724 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:52.958097 4865 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:52.958155 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.051911 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.183807 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.45:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.463304 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness 
probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.463381 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.528832 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.738304 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.738519 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.774081 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.774171 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.777528 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"0eaf574272189462b7fced379e7c324bec419a8b14cb558060d9466a414d00db"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.778504 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-central-agent" containerID="cri-o://0eaf574272189462b7fced379e7c324bec419a8b14cb558060d9466a414d00db" gracePeriod=30 Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.960917 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:53 crc kubenswrapper[4865]: I0123 12:57:53.961401 4865 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.227767 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.227823 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.228113 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.228137 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.228300 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.228323 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.228473 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.228498 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.423012 4865 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.423565 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.495053 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c82fe7f9-37d7-4874-9b2d-ba437546562f/ovn-northd/0.log" Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.495215 4865 generic.go:334] "Generic (PLEG): container finished" podID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerID="1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1" exitCode=139 Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.495361 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c82fe7f9-37d7-4874-9b2d-ba437546562f","Type":"ContainerDied","Data":"1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1"} Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.513827 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.513880 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.513914 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.513935 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.550769 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.550826 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.744128 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.854773 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:54 crc kubenswrapper[4865]: I0123 12:57:54.854854 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.061975 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.062040 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.062347 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.102831 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.102894 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.102844 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.102946 4865 
patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.102979 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103018 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103125 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g5xkl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103151 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103219 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103245 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103263 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103283 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" 
probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103297 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g5xkl container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103319 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103321 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103350 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103489 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103536 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103586 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103735 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103763 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103775 
4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.103733 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.105052 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"ec12bf4bdacfcf6c78e1359d4719470ee7eb48b8c1ace723ab75295cc13aa70d"} pod="metallb-system/frr-k8s-gh89m" containerMessage="Container frr failed liveness probe, will be restarted" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.105188 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="frr" containerID="cri-o://ec12bf4bdacfcf6c78e1359d4719470ee7eb48b8c1ace723ab75295cc13aa70d" gracePeriod=2 Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.249901 4865 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-4g249 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.249957 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.250002 4865 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-4g249 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.250014 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.512199 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.512283 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.512302 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.512359 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: E0123 12:57:55.695156 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.733704 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.733818 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.775179 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.775307 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.775348 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.775520 4865 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.816505 4865 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-thkng container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.816873 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" podUID="0cab2dc0-42b2-4029-8388-b20c287698bc" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:55 crc kubenswrapper[4865]: I0123 12:57:55.885781 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:56 crc kubenswrapper[4865]: I0123 12:57:55.885846 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:56 crc kubenswrapper[4865]: I0123 12:57:55.893828 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:56 crc kubenswrapper[4865]: I0123 12:57:55.893830 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:56 crc kubenswrapper[4865]: I0123 12:57:56.260763 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f83b3f2b-567e-4afe-9797-db1aa2bdadaa" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:56 crc kubenswrapper[4865]: I0123 12:57:56.261712 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f83b3f2b-567e-4afe-9797-db1aa2bdadaa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:56 crc kubenswrapper[4865]: E0123 12:57:56.355079 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1 is running failed: container 
process not found" containerID="1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 12:57:56 crc kubenswrapper[4865]: E0123 12:57:56.355790 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1 is running failed: container process not found" containerID="1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 12:57:56 crc kubenswrapper[4865]: E0123 12:57:56.356273 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1 is running failed: container process not found" containerID="1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 12:57:56 crc kubenswrapper[4865]: E0123 12:57:56.356324 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerName="ovn-northd" Jan 23 12:57:56 crc kubenswrapper[4865]: I0123 12:57:56.986809 4865 patch_prober.go:28] interesting pod/apiserver-76f77b778f-r8fk2 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:56 crc kubenswrapper[4865]: I0123 12:57:56.986876 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" podUID="51f498e1-f13f-4977-a3e3-ea8bc6b75c6f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:57 crc kubenswrapper[4865]: I0123 12:57:57.299823 4865 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-sgsqx container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:57 crc kubenswrapper[4865]: I0123 12:57:57.299827 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:57 crc kubenswrapper[4865]: I0123 12:57:57.299985 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.53:8081/readyz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Jan 23 12:57:57 crc kubenswrapper[4865]: I0123 12:57:57.299891 4865 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-sgsqx container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:57 crc kubenswrapper[4865]: I0123 12:57:57.300544 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:57 crc kubenswrapper[4865]: I0123 12:57:57.300590 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:57 crc kubenswrapper[4865]: I0123 12:57:57.340913 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:57 crc kubenswrapper[4865]: I0123 12:57:57.341037 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.465808 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.56:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.547857 4865 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.547887 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.547955 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.74:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 
12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.547915 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.547981 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.74:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.548008 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.547997 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.548094 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.548107 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.551567 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.551663 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.660862 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.660987 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.701486 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.71:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.770161 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-8547q" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.771591 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.772486 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.826874 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.826899 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.908837 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.908978 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.908868 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.909223 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:57.949806 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 
12:57:58.031949 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.032135 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.032217 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.056398 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.056442 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.337820 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.337844 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.419767 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.419808 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 
12:57:58.576546 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cert-manager-webhook" containerStatusID={"Type":"cri-o","ID":"32adfc53532b9a4da0fc696be93013a0d5ed9468ca28f5ee3ea470e50ce0b017"} pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" containerMessage="Container cert-manager-webhook failed liveness probe, will be restarted" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.576641 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" containerID="cri-o://32adfc53532b9a4da0fc696be93013a0d5ed9468ca28f5ee3ea470e50ce0b017" gracePeriod=30 Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.610482 4865 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.159829024s: [/var/lib/containers/storage/overlay/78e176101d4dd2ce19604b13b40384b242d94f6777b98e7383ce9088221a7651/diff /var/log/pods/openshift-apiserver_apiserver-76f77b778f-r8fk2_51f498e1-f13f-4977-a3e3-ea8bc6b75c6f/openshift-apiserver-check-endpoints/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.611294 4865 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.148258178s: [/var/lib/containers/storage/overlay/76f877dca38a7afa4efdef83ee52f8a7c9d6dfbec72b0c2a757112b1af2ea22b/diff /var/log/pods/openshift-image-registry_cluster-image-registry-operator-dc59b4c8b-vfc5n_b7e3b68e-b5c0-4446-9d59-39be6a478326/cluster-image-registry-operator/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.611324 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.611428 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1" gracePeriod=30 Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.622062 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.622674 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.632510 4865 generic.go:334] "Generic (PLEG): container finished" podID="9faffae5-73bb-4980-8092-b79a6888476d" containerID="ec12bf4bdacfcf6c78e1359d4719470ee7eb48b8c1ace723ab75295cc13aa70d" exitCode=143 Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.632550 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerDied","Data":"ec12bf4bdacfcf6c78e1359d4719470ee7eb48b8c1ace723ab75295cc13aa70d"} Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.632796 4865 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.632807 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.701830 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.773990 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.774052 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.773990 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-hzwqc" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.774148 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.774247 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.774272 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.774362 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.774565 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-hzwqc" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 
12:57:58.774644 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.774820 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.804801 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.804802 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.991789 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:58 crc kubenswrapper[4865]: I0123 12:57:58.991879 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.116740 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.438864 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.438859 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.704899 4865 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.705058 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.788010 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.788009 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.788138 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.788070 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.788241 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.788266 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.789500 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945"} pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Jan 23 12:57:59 crc kubenswrapper[4865]: I0123 12:57:59.789547 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" containerID="cri-o://9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945" gracePeriod=30 Jan 23 12:58:00 crc kubenswrapper[4865]: 
I0123 12:58:00.244809 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:00 crc kubenswrapper[4865]: I0123 12:58:00.244809 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:00 crc kubenswrapper[4865]: I0123 12:58:00.244939 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:58:00 crc kubenswrapper[4865]: I0123 12:58:00.569172 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:00 crc kubenswrapper[4865]: I0123 12:58:00.569536 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:00 crc kubenswrapper[4865]: I0123 12:58:00.830817 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:00 crc kubenswrapper[4865]: I0123 12:58:00.830890 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.286929 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.368873 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f83b3f2b-567e-4afe-9797-db1aa2bdadaa" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.368936 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" 
podUID="f83b3f2b-567e-4afe-9797-db1aa2bdadaa" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:01 crc kubenswrapper[4865]: E0123 12:58:01.382666 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1 is running failed: container process not found" containerID="1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 12:58:01 crc kubenswrapper[4865]: E0123 12:58:01.383572 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1 is running failed: container process not found" containerID="1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 12:58:01 crc kubenswrapper[4865]: E0123 12:58:01.384330 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1 is running failed: container process not found" containerID="1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 23 12:58:01 crc kubenswrapper[4865]: E0123 12:58:01.384400 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1c624309c41d7e07a20756d666f03e3eb6635f1e6cb23f4a4e2ac07b6fc1c2b1 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" containerName="ovn-northd" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.489111 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.489191 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.489300 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.629416 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.629510 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.629645 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.629889 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.630507 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.630566 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" containerID="cri-o://43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5" gracePeriod=30 Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.659456 4865 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-qtxv5 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.20:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.659548 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.20:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.769903 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.770044 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.770938 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" containerName="nbdb" probeResult="failure" output="command timed out" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.771110 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-8547q" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.771193 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 
12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.771232 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" containerName="sbdb" probeResult="failure" output="command timed out" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.771261 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.771340 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.771343 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.771399 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.772503 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.778910 4865 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-thkng container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.779010 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" podUID="0cab2dc0-42b2-4029-8388-b20c287698bc" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.22:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.957445 4865 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body= Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.957497 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.957579 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.984499 4865 patch_prober.go:28] interesting pod/apiserver-76f77b778f-r8fk2 container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get 
\"https://10.217.0.8:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:01 crc kubenswrapper[4865]: I0123 12:58:01.984573 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" podUID="51f498e1-f13f-4977-a3e3-ea8bc6b75c6f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.672592 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.679792 4865 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-hrzcb container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.679864 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" podUID="c8896518-4b5b-4712-9994-0bb445a3504f" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.679913 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.680856 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"9e2d67f5b624196c2ca7a39eb784d04a4289c7a162f39ffd49470ee7ed4b98ed"} pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.680888 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" podUID="c8896518-4b5b-4712-9994-0bb445a3504f" containerName="authentication-operator" containerID="cri-o://9e2d67f5b624196c2ca7a39eb784d04a4289c7a162f39ffd49470ee7ed4b98ed" gracePeriod=30 Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.737219 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.737314 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" 
probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.737392 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.737879 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.737960 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.738009 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.738858 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"45ea759c1c5e5541e38c656a91725ebb01f67b53b71f6d6ca75e869cf22a64ba"} pod="openshift-console-operator/console-operator-58897d9998-8lsbn" containerMessage="Container console-operator failed liveness probe, will be restarted" Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.738910 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" containerID="cri-o://45ea759c1c5e5541e38c656a91725ebb01f67b53b71f6d6ca75e869cf22a64ba" gracePeriod=30 Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.769767 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:58:02 crc kubenswrapper[4865]: I0123 12:58:02.770154 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:58:02 crc kubenswrapper[4865]: E0123 12:58:02.794858 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T12:57:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T12:57:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T12:57:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T12:57:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.091857 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.091962 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.091958 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.185022 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.45:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.185252 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.463799 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.463903 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.578847 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.578981 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.578862 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.579313 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.579378 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.736808 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.736897 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.736800 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.737188 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.738481 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:03 crc 
kubenswrapper[4865]: I0123 12:58:03.738532 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.770012 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.896835 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"d3d1a0d7a2dfbb419198472561c8b84f95b853d7374fc21ee4c10bfa5a6a34a1"} pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" containerMessage="Container webhook-server failed liveness probe, will be restarted" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.897136 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerName="webhook-server" containerID="cri-o://d3d1a0d7a2dfbb419198472561c8b84f95b853d7374fc21ee4c10bfa5a6a34a1" gracePeriod=2 Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.928413 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.928408 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.928658 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hz4vm" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.928685 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ovn-controller-hz4vm" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.929934 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ovn-controller" containerStatusID={"Type":"cri-o","ID":"fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8"} pod="openstack/ovn-controller-hz4vm" containerMessage="Container ovn-controller failed liveness probe, will be restarted" Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.934337 4865 generic.go:334] "Generic (PLEG): container finished" podID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerID="43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5" exitCode=2 Jan 23 12:58:03 crc kubenswrapper[4865]: I0123 12:58:03.934506 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4cb0a89a-49f9-4a31-9cec-669e88882018","Type":"ContainerDied","Data":"43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5"} Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.120673 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 
12:58:04 crc kubenswrapper[4865]: E0123 12:58:04.121324 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.134786 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.226837 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.45:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.228487 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.228522 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.228732 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.228790 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.420834 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.421570 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:58:04 crc 
kubenswrapper[4865]: I0123 12:58:04.422716 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"16c3ee308f4ea038e0db292673884d80dda1fbd5964d94cb29ddd5c2ddaa1043"} pod="metallb-system/controller-6968d8fdc4-8bjkz" containerMessage="Container controller failed liveness probe, will be restarted" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.422851 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerName="controller" containerID="cri-o://16c3ee308f4ea038e0db292673884d80dda1fbd5964d94cb29ddd5c2ddaa1043" gracePeriod=2 Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.462026 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.462238 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.462312 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="00b1558f-6054-43bb-82a7-329436ce1a0b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.175:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.546530 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.546716 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.546789 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-48b72" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.546557 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.546939 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.547168 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-48b72" Jan 23 12:58:04 crc 
kubenswrapper[4865]: I0123 12:58:04.559613 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"1420969c42f470d4a513235aeea7c05ddcfab5fb6d197c86b4cfc87c977c6dc8"} pod="openshift-console/downloads-7954f5f757-48b72" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.559684 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" containerID="cri-o://1420969c42f470d4a513235aeea7c05ddcfab5fb6d197c86b4cfc87c977c6dc8" gracePeriod=2 Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.619802 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.775006 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.775136 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.775165 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.776055 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ovsdb-server" containerStatusID={"Type":"cri-o","ID":"bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c"} pod="openstack/ovn-controller-ovs-spv64" containerMessage="Container ovsdb-server failed liveness probe, will be restarted" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.778080 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.779814 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.944068 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.958622 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c82fe7f9-37d7-4874-9b2d-ba437546562f/ovn-northd/0.log" Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.958767 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"c82fe7f9-37d7-4874-9b2d-ba437546562f","Type":"ContainerStarted","Data":"eb58423206c258b75de2d7ed409ef8fab5b1b103e35da73686f76b743faf21c9"} Jan 23 12:58:04 crc kubenswrapper[4865]: I0123 12:58:04.964419 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerStarted","Data":"36967ff6eb84771c5b379b8fbd5102da53deca1bb3dbfbe97dbb9df7b6a7ef62"} Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.027818 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.027908 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.027964 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.027969 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028002 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.027984 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028018 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.027928 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:58:05 
crc kubenswrapper[4865]: I0123 12:58:05.028093 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028107 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028118 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028174 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028212 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g5xkl container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028256 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028284 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028382 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028396 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028226 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028436 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g5xkl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028448 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028468 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028480 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028505 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.028570 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.029039 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.029087 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.029105 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.029134 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get 
\"https://10.217.0.62:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.029154 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.029411 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" containerMessage="Container olm-operator failed liveness probe, will be restarted" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.029455 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" containerID="cri-o://adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064" gracePeriod=30 Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.029818 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.029837 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.029859 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.029877 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.029895 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.030111 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"811113bc6d4b48797e3257d6ecb031d47d0fdf3124b5f86b193d7c7a21255914"} pod="openshift-ingress/router-default-5444994796-swk7h" containerMessage="Container router failed liveness probe, will be restarted" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.030154 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" containerID="cri-o://811113bc6d4b48797e3257d6ecb031d47d0fdf3124b5f86b193d7c7a21255914" gracePeriod=10 Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.030215 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.030239 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.030248 4865 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.030516 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.030549 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" containerID="cri-o://574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc" gracePeriod=30 Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.030630 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"295f055b70b23f536bbb0d34672057382274d320b9f85b221c28b54f85445626"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" containerMessage="Container packageserver failed liveness probe, will be restarted" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.030671 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" containerID="cri-o://295f055b70b23f536bbb0d34672057382274d320b9f85b221c28b54f85445626" gracePeriod=30 Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.030797 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389"} pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.030824 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerName="frr-k8s-webhook-server" containerID="cri-o://54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389" gracePeriod=10 Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.031290 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.031819 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82"} pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.250826 4865 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-4g249 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.251116 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.251191 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.266333 4865 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-4g249 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.266387 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.266437 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.374896 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="00b1558f-6054-43bb-82a7-329436ce1a0b" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.175:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.512366 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.512401 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.512432 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.512460 4865 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.512487 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.513653 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"fe8e5fdd26caa016dbb63f464761d687129188d8bc9524e4503f2cdbb1d13171"} pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.513696 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" containerID="cri-o://fe8e5fdd26caa016dbb63f464761d687129188d8bc9524e4503f2cdbb1d13171" gracePeriod=30 Jan 23 12:58:05 crc kubenswrapper[4865]: E0123 12:58:05.699426 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.734998 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.735159 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.773183 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.773332 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.773469 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.773545 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 12:58:05 crc 
kubenswrapper[4865]: I0123 12:58:05.773551 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.773584 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.774572 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff"} pod="openshift-marketplace/redhat-marketplace-nhd4g" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.774611 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec"} pod="openshift-marketplace/redhat-operators-tqvjg" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.774630 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" containerID="cri-o://d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" gracePeriod=30 Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.774656 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" containerID="cri-o://68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" gracePeriod=30 Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.775125 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.775182 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.808860 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7cc6994f4f-2qmtv" podUID="ccb11fa4-50bf-4e12-a5fa-782c911e6955" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.174:8080/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.808867 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7cc6994f4f-2qmtv" podUID="ccb11fa4-50bf-4e12-a5fa-782c911e6955" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.174:8080/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.817475 4865 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-thkng container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get 
\"https://10.217.0.22:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.817531 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-thkng" podUID="0cab2dc0-42b2-4029-8388-b20c287698bc" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.849918 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.850033 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-szb9h" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.850116 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.850568 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-szb9h" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.861936 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"aabab20e981a150a139733adbff53f4aa6231b20440051727cbd214182f0f5a1"} pod="metallb-system/speaker-szb9h" containerMessage="Container speaker failed liveness probe, will be restarted" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.862047 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" containerID="cri-o://aabab20e981a150a139733adbff53f4aa6231b20440051727cbd214182f0f5a1" gracePeriod=2 Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.972872 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.973310 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"d6acb080922eeaa2369550b183fe958e11a57932e047f282f28c4fa5f378419b"} pod="metallb-system/frr-k8s-gh89m" containerMessage="Container controller failed liveness probe, will be restarted" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.973452 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" containerID="cri-o://d6acb080922eeaa2369550b183fe958e11a57932e047f282f28c4fa5f378419b" gracePeriod=2 Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.973824 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450"} 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Jan 23 12:58:05 crc kubenswrapper[4865]: I0123 12:58:05.973853 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" containerID="cri-o://23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450" gracePeriod=30 Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.041842 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.041903 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.041857 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.041857 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75dd7565cd-4skz5" podUID="80922e66-3668-4bf5-8bdf-ce6c9621fcd5" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.166:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.042036 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.042069 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.042105 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.042121 4865 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.083780 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.083777 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.083955 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.294078 4865 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-4g249 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.294458 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.418229 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-858654f9db-mbdcq" podUID="a332d40d-1d78-4d9d-b768-b988654c732a" containerName="cert-manager-controller" probeResult="failure" output="Get \"http://10.217.0.73:9403/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.418985 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.74:6080/healthz\": dial tcp 10.217.0.74:6080: connect: connection refused" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.560894 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: 
connect: connection refused" start-of-body= Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.560951 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.629910 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8081/readyz\": dial tcp 10.217.0.222:8081: connect: connection refused" Jan 23 12:58:06 crc kubenswrapper[4865]: E0123 12:58:06.716634 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3685d2b2_151b_479a_92c1_ae400eacd1b9.slice/crio-16c3ee308f4ea038e0db292673884d80dda1fbd5964d94cb29ddd5c2ddaa1043.scope\": RecentStats: unable to find data in memory cache]" Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.984092 4865 generic.go:334] "Generic (PLEG): container finished" podID="c63db198-8ec8-42b1-8211-d207c172706c" containerID="0eaf574272189462b7fced379e7c324bec419a8b14cb558060d9466a414d00db" exitCode=0 Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.984161 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c63db198-8ec8-42b1-8211-d207c172706c","Type":"ContainerDied","Data":"0eaf574272189462b7fced379e7c324bec419a8b14cb558060d9466a414d00db"} Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.987967 4865 patch_prober.go:28] interesting pod/apiserver-76f77b778f-r8fk2 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:06 crc kubenswrapper[4865]: I0123 12:58:06.988123 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-r8fk2" podUID="51f498e1-f13f-4977-a3e3-ea8bc6b75c6f" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.014811 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.287073 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hz4vm" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.384941 4865 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-sgsqx container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.385032 4865 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-sgsqx container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.385026 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.385066 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.385191 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.385220 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.53:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.385966 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.386010 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.468839 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.509954 4865 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-4g249 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.510048 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" 
podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.510152 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.53:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.510281 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.591868 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.591897 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.56:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.591998 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.592040 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.673840 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.714828 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.756110 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.756274 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" 
podUID="10627175-8e39-4799-bec7-c0b49b938a29" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.756466 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.56:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.770969 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-8547q" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.771095 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.838878 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.839436 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.839574 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.946849 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:07 crc kubenswrapper[4865]: I0123 12:58:07.947016 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.015095 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" event={"ID":"9177b0d0-3ce7-40fe-8567-85cb8dd5227a","Type":"ContainerDied","Data":"d3d1a0d7a2dfbb419198472561c8b84f95b853d7374fc21ee4c10bfa5a6a34a1"} Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.015039 4865 generic.go:334] "Generic (PLEG): container finished" podID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerID="d3d1a0d7a2dfbb419198472561c8b84f95b853d7374fc21ee4c10bfa5a6a34a1" exitCode=137 Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.018050 4865 generic.go:334] "Generic (PLEG): container finished" podID="1405b73d-070d-495e-a80d-46fc2505ff8c" 
containerID="32adfc53532b9a4da0fc696be93013a0d5ed9468ca28f5ee3ea470e50ce0b017" exitCode=0 Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.018130 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" event={"ID":"1405b73d-070d-495e-a80d-46fc2505ff8c","Type":"ContainerDied","Data":"32adfc53532b9a4da0fc696be93013a0d5ed9468ca28f5ee3ea470e50ce0b017"} Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.018851 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry" containerStatusID={"Type":"cri-o","ID":"a82cb642866d620e7e8da4c34411e1a4054fd3eb6ccb5d984ad3c250d3945b97"} pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" containerMessage="Container registry failed liveness probe, will be restarted" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.030931 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.112804 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.112810 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.161817 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8547q" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.359815 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.359908 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.484086 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.484322 4865 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.484366 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.484438 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.484436 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.484683 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.484739 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.484772 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.484819 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.484939 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.567495 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.567567 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.738827 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.738916 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.739173 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.739421 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.739475 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.739539 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.772897 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-hzwqc" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.772961 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-hzwqc" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.772992 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.773054 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.773615 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.773711 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.774020 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792"} pod="openstack-operators/openstack-operator-index-hzwqc" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.774056 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-hzwqc" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerName="registry-server" containerID="cri-o://13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" gracePeriod=30 Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.775892 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.775894 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.775969 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hh6cp" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.775974 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.775992 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.776020 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/community-operators-hh6cp" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.776236 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.776970 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1"} pod="openshift-marketplace/community-operators-hh6cp" containerMessage="Container 
registry-server failed liveness probe, will be restarted" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.777012 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" containerID="cri-o://578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" gracePeriod=30 Jan 23 12:58:08 crc kubenswrapper[4865]: E0123 12:58:08.780879 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.780979 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.781012 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.53:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: E0123 12:58:08.782245 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:08 crc kubenswrapper[4865]: E0123 12:58:08.783445 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:08 crc kubenswrapper[4865]: E0123 12:58:08.783483 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.821902 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.822005 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.947835 4865 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.947846 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.56:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.947888 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.948336 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.948419 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:58:08 crc kubenswrapper[4865]: I0123 12:58:08.988866 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.52:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.031843 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.031870 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.032130 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.073847 4865 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.271535 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-7954f5f757-48b72_bdee5ba9-99e1-495c-9b52-f670cbbffea2/download-server/0.log" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.271639 4865 generic.go:334] "Generic (PLEG): container finished" podID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerID="1420969c42f470d4a513235aeea7c05ddcfab5fb6d197c86b4cfc87c977c6dc8" exitCode=137 Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.271765 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-48b72" event={"ID":"bdee5ba9-99e1-495c-9b52-f670cbbffea2","Type":"ContainerDied","Data":"1420969c42f470d4a513235aeea7c05ddcfab5fb6d197c86b4cfc87c977c6dc8"} Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.275943 4865 generic.go:334] "Generic (PLEG): container finished" podID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerID="16c3ee308f4ea038e0db292673884d80dda1fbd5964d94cb29ddd5c2ddaa1043" exitCode=137 Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.276434 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630"} pod="openshift-marketplace/certified-operators-qwxxg" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.276474 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" containerID="cri-o://4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" gracePeriod=30 Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.276518 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8bjkz" event={"ID":"3685d2b2-151b-479a-92c1-ae400eacd1b9","Type":"ContainerDied","Data":"16c3ee308f4ea038e0db292673884d80dda1fbd5964d94cb29ddd5c2ddaa1043"} Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.437937 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.437971 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.438066 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.438110 4865 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.439048 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"5742c7fb47488dc050a829f7c69eb88fe730402225438befdb4f7b95a364495a"} pod="openstack/horizon-66f7b94cdb-f7pw2" containerMessage="Container horizon failed liveness probe, will be restarted" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.439087 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" containerID="cri-o://5742c7fb47488dc050a829f7c69eb88fe730402225438befdb4f7b95a364495a" gracePeriod=30 Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.568806 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.568875 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.568987 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.570140 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.570193 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.728870 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.729202 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.780791 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.864767 4865 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.864893 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:09 crc kubenswrapper[4865]: I0123 12:58:09.949802 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.203783 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.287099 4865 generic.go:334] "Generic (PLEG): container finished" podID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerID="295f055b70b23f536bbb0d34672057382274d320b9f85b221c28b54f85445626" exitCode=0 Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.287478 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" event={"ID":"2699af1d-57a0-4ce2-9550-b423f9eafc0f","Type":"ContainerDied","Data":"295f055b70b23f536bbb0d34672057382274d320b9f85b221c28b54f85445626"} Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.290732 4865 generic.go:334] "Generic (PLEG): container finished" podID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerID="aabab20e981a150a139733adbff53f4aa6231b20440051727cbd214182f0f5a1" exitCode=137 Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.290778 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-szb9h" event={"ID":"3dee20a9-c14d-4a42-afb1-87d126996c56","Type":"ContainerDied","Data":"aabab20e981a150a139733adbff53f4aa6231b20440051727cbd214182f0f5a1"} Jan 23 12:58:10 crc kubenswrapper[4865]: E0123 12:58:10.459901 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" 
cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:10 crc kubenswrapper[4865]: E0123 12:58:10.461538 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:10 crc kubenswrapper[4865]: E0123 12:58:10.465205 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:10 crc kubenswrapper[4865]: E0123 12:58:10.465280 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.482408 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" start-of-body= Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.482469 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.482503 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": read tcp 192.168.126.11:37132->192.168.126.11:10257: read: connection reset by peer" start-of-body= Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.482553 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.482580 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": read tcp 192.168.126.11:37132->192.168.126.11:10257: read: connection reset by peer" Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.482652 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.483861 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"716198fba845e9e3bc3c1765621977f292a6e8ecb4f116f58a73e19b9cb9cabf"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed liveness probe, will be restarted" 
Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.483968 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://716198fba845e9e3bc3c1765621977f292a6e8ecb4f116f58a73e19b9cb9cabf" gracePeriod=30 Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.503959 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": read tcp 192.168.126.11:37136->192.168.126.11:10257: read: connection reset by peer" start-of-body= Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.504041 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": read tcp 192.168.126.11:37136->192.168.126.11:10257: read: connection reset by peer" Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.768777 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:58:10 crc kubenswrapper[4865]: I0123 12:58:10.772747 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:58:10 crc kubenswrapper[4865]: E0123 12:58:10.774714 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:10 crc kubenswrapper[4865]: E0123 12:58:10.776247 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:10 crc kubenswrapper[4865]: E0123 12:58:10.777583 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:10 crc kubenswrapper[4865]: E0123 12:58:10.777634 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" Jan 23 12:58:11 crc kubenswrapper[4865]: I0123 12:58:11.259853 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f83b3f2b-567e-4afe-9797-db1aa2bdadaa" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:11 crc kubenswrapper[4865]: I0123 12:58:11.260200 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="f83b3f2b-567e-4afe-9797-db1aa2bdadaa" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.219:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:11 crc kubenswrapper[4865]: I0123 12:58:11.303951 4865 generic.go:334] "Generic (PLEG): container finished" podID="843c383b-053f-42f5-88ce-7a216f5354a3" containerID="574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc" exitCode=0 Jan 23 12:58:11 crc kubenswrapper[4865]: I0123 12:58:11.304054 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" event={"ID":"843c383b-053f-42f5-88ce-7a216f5354a3","Type":"ContainerDied","Data":"574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc"} Jan 23 12:58:11 crc kubenswrapper[4865]: I0123 12:58:11.382825 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.74:6080/healthz\": dial tcp 10.217.0.74:6080: connect: connection refused" Jan 23 12:58:11 crc kubenswrapper[4865]: I0123 12:58:11.489083 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:11 crc kubenswrapper[4865]: I0123 12:58:11.489158 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:11.661775 4865 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-qtxv5 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.20:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:11.661841 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.20:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:11.737171 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:11.737241 4865 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:11.769071 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" probeResult="failure" output="command timed out" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:11.774414 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" containerName="nbdb" probeResult="failure" output="command timed out" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:11.774436 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" containerName="sbdb" probeResult="failure" output="command timed out" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:11.774526 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:11.774576 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:11.957671 4865 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body= Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:11.957720 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.348519 4865 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.208220391s: [/var/lib/containers/storage/overlay/ce6a5e4f85afec31c576cb47ae0d69cb7fc03072c699cdd93772e741d6ca0ad3/diff /var/log/pods/openstack_neutron-b9c4785f9-kx698_a6bc30aa-5b02-4c6b-ac0e-43799b7929dd/neutron-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.356576 4865 generic.go:334] "Generic (PLEG): container finished" podID="9faffae5-73bb-4980-8092-b79a6888476d" containerID="d6acb080922eeaa2369550b183fe958e11a57932e047f282f28c4fa5f378419b" exitCode=137 Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.373504 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.373567 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerDied","Data":"d6acb080922eeaa2369550b183fe958e11a57932e047f282f28c4fa5f378419b"} Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.386268 4865 patch_prober.go:28] 
interesting pod/image-registry-66df7c8f76-sgsqx container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.386339 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.70:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.391248 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.391307 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.437564 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.530521 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.574408 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.574491 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.576789 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" Jan 23 12:58:12 crc kubenswrapper[4865]: I0123 12:58:12.654718 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.46:7472/metrics\": dial tcp 10.217.0.46:7472: connect: connection refused" Jan 23 12:58:12 crc kubenswrapper[4865]: E0123 12:58:12.712205 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:12 crc kubenswrapper[4865]: E0123 12:58:12.713466 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:12 crc kubenswrapper[4865]: E0123 12:58:12.714835 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:12 crc kubenswrapper[4865]: E0123 12:58:12.714903 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.338920 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.48:29150/metrics\": dial tcp 10.217.0.48:29150: connect: connection refused" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.390218 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.390292 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.462911 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.463009 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.527883 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 
12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.573766 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.575641 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-8lsbn_cfe7c397-99ae-494d-a418-b0f08568f156/console-operator/0.log" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.575713 4865 generic.go:334] "Generic (PLEG): container finished" podID="cfe7c397-99ae-494d-a418-b0f08568f156" containerID="45ea759c1c5e5541e38c656a91725ebb01f67b53b71f6d6ca75e869cf22a64ba" exitCode=1 Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.575861 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" event={"ID":"cfe7c397-99ae-494d-a418-b0f08568f156","Type":"ContainerDied","Data":"45ea759c1c5e5541e38c656a91725ebb01f67b53b71f6d6ca75e869cf22a64ba"} Jan 23 12:58:13 crc kubenswrapper[4865]: E0123 12:58:13.584996 4865 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.614258 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" probeResult="failure" output="" Jan 23 12:58:13 crc kubenswrapper[4865]: E0123 12:58:13.617037 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:13 crc kubenswrapper[4865]: E0123 12:58:13.617643 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:13 crc kubenswrapper[4865]: E0123 12:58:13.618004 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:13 crc kubenswrapper[4865]: E0123 12:58:13.618082 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" Jan 23 12:58:13 crc kubenswrapper[4865]: E0123 12:58:13.618480 4865 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:13 crc kubenswrapper[4865]: E0123 12:58:13.620644 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:13 crc kubenswrapper[4865]: E0123 12:58:13.621018 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:13 crc kubenswrapper[4865]: E0123 12:58:13.621071 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.776386 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.793005 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": dial tcp 10.217.0.47:7572: connect: connection refused" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.816094 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.839172 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.839253 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.855000 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router 
namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Jan 23 12:58:13 crc kubenswrapper[4865]: [+]has-synced ok Jan 23 12:58:13 crc kubenswrapper[4865]: [-]process-running failed: reason withheld Jan 23 12:58:13 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.855081 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.883510 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Jan 23 12:58:13 crc kubenswrapper[4865]: I0123 12:58:13.883560 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.107703 4865 generic.go:334] "Generic (PLEG): container finished" podID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerID="54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389" exitCode=0 Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.107798 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" event={"ID":"4116044f-0cc3-41fb-9f26-536213e1dfa3","Type":"ContainerDied","Data":"54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389"} Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.165951 4865 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-4g249 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.166011 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.228686 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.228754 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.228911 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.229003 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.229069 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.229129 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.401843 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-apiserver" containerStatusID={"Type":"cri-o","ID":"5a66ef84b06b7efe6172f489b5b4f6cad35b034791d3c7a4bbff3f436519ae0e"} pod="openshift-kube-apiserver/kube-apiserver-crc" containerMessage="Container kube-apiserver failed liveness probe, will be restarted" Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.401995 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" containerID="cri-o://5a66ef84b06b7efe6172f489b5b4f6cad35b034791d3c7a4bbff3f436519ae0e" gracePeriod=15 Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.517749 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.518053 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.565971 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.773627 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-hzwqc" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerName="registry-server" probeResult="failure" output="command timed out" Jan 23 12:58:14 crc kubenswrapper[4865]: E0123 12:58:14.775544 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:14 crc kubenswrapper[4865]: E0123 12:58:14.777360 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:14 crc kubenswrapper[4865]: E0123 12:58:14.779121 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:14 crc kubenswrapper[4865]: E0123 12:58:14.779154 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-hzwqc" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerName="registry-server" Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.804178 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:58:14 crc kubenswrapper[4865]: I0123 12:58:14.902248 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.050769 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.051161 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.129775 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:58:15 crc kubenswrapper[4865]: E0123 12:58:15.137268 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.150760 4865 generic.go:334] "Generic (PLEG): container finished" podID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" 
exitCode=0 Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.150951 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxxg" event={"ID":"2bcb4671-0b01-435d-aa4b-b9596654bfff","Type":"ContainerDied","Data":"4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630"} Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.153272 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/0.log" Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.165036 4865 generic.go:334] "Generic (PLEG): container finished" podID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerID="eeb461b6cb630a97f0fbc5e12f059a8993d241deb81f696209187f4282c21944" exitCode=137 Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.165239 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" event={"ID":"141f6171-3d39-421b-98f4-6accc5d30ae2","Type":"ContainerDied","Data":"eeb461b6cb630a97f0fbc5e12f059a8993d241deb81f696209187f4282c21944"} Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.182008 4865 generic.go:334] "Generic (PLEG): container finished" podID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" exitCode=0 Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.182231 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh6cp" event={"ID":"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2","Type":"ContainerDied","Data":"578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1"} Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.187771 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.197419 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.197474 4865 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="716198fba845e9e3bc3c1765621977f292a6e8ecb4f116f58a73e19b9cb9cabf" exitCode=1 Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.197591 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"716198fba845e9e3bc3c1765621977f292a6e8ecb4f116f58a73e19b9cb9cabf"} Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.197707 4865 scope.go:117] "RemoveContainer" containerID="48d901fc7497a7255ab90307936e20b637e75dfb1cbe169918fbbb925886ade4" Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.201311 4865 generic.go:334] "Generic (PLEG): container finished" podID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" exitCode=0 Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.201397 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhd4g" 
event={"ID":"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f","Type":"ContainerDied","Data":"d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff"} Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.205864 4865 generic.go:334] "Generic (PLEG): container finished" podID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" exitCode=0 Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.206193 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqvjg" event={"ID":"67ef4926-eb81-4d83-a9a1-4b7e9035892f","Type":"ContainerDied","Data":"68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec"} Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.550851 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:15 crc kubenswrapper[4865]: I0123 12:58:15.551262 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.218057 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.53:8081/readyz\": dial tcp 10.217.0.53:8081: connect: connection refused" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.225307 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.54:8081/readyz\": dial tcp 10.217.0.54:8081: connect: connection refused" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.227042 4865 generic.go:334] "Generic (PLEG): container finished" podID="e92ddc14-bdb6-4407-b8a3-047079030166" containerID="593f87f62b3ccdf0be76949bdac5a423993e1d8217741c16ed8d4bfe28a7e56c" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.227136 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" event={"ID":"e92ddc14-bdb6-4407-b8a3-047079030166","Type":"ContainerDied","Data":"593f87f62b3ccdf0be76949bdac5a423993e1d8217741c16ed8d4bfe28a7e56c"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.228577 4865 scope.go:117] "RemoveContainer" containerID="593f87f62b3ccdf0be76949bdac5a423993e1d8217741c16ed8d4bfe28a7e56c" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.230285 4865 generic.go:334] "Generic (PLEG): container finished" podID="97f32b90-08dc-4333-95e6-a2e85648931f" containerID="f7db30f04928c52c6d1185acbe8a775b6211677b2574d48e1b3cd288e7764e52" exitCode=0 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.230353 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" 
event={"ID":"97f32b90-08dc-4333-95e6-a2e85648931f","Type":"ContainerDied","Data":"f7db30f04928c52c6d1185acbe8a775b6211677b2574d48e1b3cd288e7764e52"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.232557 4865 generic.go:334] "Generic (PLEG): container finished" podID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" containerID="cc196a7d0a8483e448852dd3080814eafacc01c4fa3eef717a29e19532163b8f" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.232619 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" event={"ID":"bdf8f14b-af0d-43cc-b624-7dab2879dc4b","Type":"ContainerDied","Data":"cc196a7d0a8483e448852dd3080814eafacc01c4fa3eef717a29e19532163b8f"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.235015 4865 generic.go:334] "Generic (PLEG): container finished" podID="c8896518-4b5b-4712-9994-0bb445a3504f" containerID="9e2d67f5b624196c2ca7a39eb784d04a4289c7a162f39ffd49470ee7ed4b98ed" exitCode=0 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.235070 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" event={"ID":"c8896518-4b5b-4712-9994-0bb445a3504f","Type":"ContainerDied","Data":"9e2d67f5b624196c2ca7a39eb784d04a4289c7a162f39ffd49470ee7ed4b98ed"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.238274 4865 generic.go:334] "Generic (PLEG): container finished" podID="661fbfd2-7d52-419a-943f-c57854d2306b" containerID="2504392e494c1ef358cfb124eb480bdbf70a7733b9f7b625220f52033a353160" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.238308 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" event={"ID":"661fbfd2-7d52-419a-943f-c57854d2306b","Type":"ContainerDied","Data":"2504392e494c1ef358cfb124eb480bdbf70a7733b9f7b625220f52033a353160"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.240263 4865 generic.go:334] "Generic (PLEG): container finished" podID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerID="fe8e5fdd26caa016dbb63f464761d687129188d8bc9524e4503f2cdbb1d13171" exitCode=0 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.240308 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" event={"ID":"60877fc9-78f8-4298-8104-8cd90e28d3bd","Type":"ContainerDied","Data":"fe8e5fdd26caa016dbb63f464761d687129188d8bc9524e4503f2cdbb1d13171"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.242408 4865 generic.go:334] "Generic (PLEG): container finished" podID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" exitCode=0 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.242457 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hzwqc" event={"ID":"c011a295-505e-465c-a8d6-a647d7ad8ed2","Type":"ContainerDied","Data":"13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.243869 4865 generic.go:334] "Generic (PLEG): container finished" podID="a9bb243e-e7c3-4f68-be35-d86fa049c570" containerID="d16a099e596d85c91fd0fa1d94c0861d76e76c4983032b1c5e97e173ecc3c6c4" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.243903 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" event={"ID":"a9bb243e-e7c3-4f68-be35-d86fa049c570","Type":"ContainerDied","Data":"d16a099e596d85c91fd0fa1d94c0861d76e76c4983032b1c5e97e173ecc3c6c4"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.245795 4865 generic.go:334] "Generic (PLEG): container finished" podID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerID="c51c964c06647f878163c7193cda0d69f17f715564a8f339956514f2b970af5a" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.245832 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" event={"ID":"b2ea2452-dc3b-4b93-a9d4-e562a63111c9","Type":"ContainerDied","Data":"c51c964c06647f878163c7193cda0d69f17f715564a8f339956514f2b970af5a"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.247564 4865 generic.go:334] "Generic (PLEG): container finished" podID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" containerID="476d6bdbc43b8e01fb3e9f46b5fac5875299d36c4c9a12328874015faac89f4f" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.247649 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" event={"ID":"6aca96af-acfa-4c68-a2f4-ed19f08ddc4e","Type":"ContainerDied","Data":"476d6bdbc43b8e01fb3e9f46b5fac5875299d36c4c9a12328874015faac89f4f"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.250321 4865 generic.go:334] "Generic (PLEG): container finished" podID="93194445-a021-4960-ab82-085f13cc959d" containerID="1b9b6be821f701ac56b53a484a353bea5212b6f02ef587724d911e861b2fc97c" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.250360 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" event={"ID":"93194445-a021-4960-ab82-085f13cc959d","Type":"ContainerDied","Data":"1b9b6be821f701ac56b53a484a353bea5212b6f02ef587724d911e861b2fc97c"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.252974 4865 generic.go:334] "Generic (PLEG): container finished" podID="8e227974-40b8-4d16-8d5f-961b705a9740" containerID="0b2a7803942b15c05aaa94d320090897efd8100e0a4bcd07a1a0e623a23a3516" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.253015 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" event={"ID":"8e227974-40b8-4d16-8d5f-961b705a9740","Type":"ContainerDied","Data":"0b2a7803942b15c05aaa94d320090897efd8100e0a4bcd07a1a0e623a23a3516"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.256992 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8bjkz" event={"ID":"3685d2b2-151b-479a-92c1-ae400eacd1b9","Type":"ContainerStarted","Data":"32f90a1a6ab5a61d1d5a4a1eba24f87dbec2aef63107d25e9e01782e3a02c493"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.257077 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.259480 4865 generic.go:334] "Generic (PLEG): container finished" podID="dbfec6f5-80b4-480f-a958-c3107b2776c0" containerID="129bfde63977859660c6eb3aa9e50a03c29e7268576ca70bbc6f2ad00f8febc8" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.259581 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" event={"ID":"dbfec6f5-80b4-480f-a958-c3107b2776c0","Type":"ContainerDied","Data":"129bfde63977859660c6eb3aa9e50a03c29e7268576ca70bbc6f2ad00f8febc8"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.259798 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": dial tcp 10.217.0.55:8081: connect: connection refused" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.261481 4865 generic.go:334] "Generic (PLEG): container finished" podID="1959a742-ade2-4266-9a93-e96a1b6e3908" containerID="34c12403002230bf2149bbd73d264e2d87708fb1feba635b4fb8637cfcefe7d5" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.261539 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" event={"ID":"1959a742-ade2-4266-9a93-e96a1b6e3908","Type":"ContainerDied","Data":"34c12403002230bf2149bbd73d264e2d87708fb1feba635b4fb8637cfcefe7d5"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.263875 4865 generic.go:334] "Generic (PLEG): container finished" podID="2c3366d9-565f-4601-acbb-b473dcfe126c" containerID="e3fe3b1865710694ccdd89df2ca4de17a4db373f4f67811172ced80874644711" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.263921 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" event={"ID":"2c3366d9-565f-4601-acbb-b473dcfe126c","Type":"ContainerDied","Data":"e3fe3b1865710694ccdd89df2ca4de17a4db373f4f67811172ced80874644711"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.266499 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" event={"ID":"9177b0d0-3ce7-40fe-8567-85cb8dd5227a","Type":"ContainerStarted","Data":"220ca834be13b31e6269099d1d8bc1f8f8a8374c70fc5e8ee1abc9fd26326377"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.266644 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.339805 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.56:8081/readyz\": dial tcp 10.217.0.56:8081: connect: connection refused" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.348245 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": dial tcp 10.217.0.57:8081: connect: connection refused" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.382255 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.74:6080/healthz\": dial tcp 10.217.0.74:6080: connect: connection refused" Jan 23 
12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.382355 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.504242 4865 scope.go:117] "RemoveContainer" containerID="cc196a7d0a8483e448852dd3080814eafacc01c4fa3eef717a29e19532163b8f" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.505041 4865 scope.go:117] "RemoveContainer" containerID="2504392e494c1ef358cfb124eb480bdbf70a7733b9f7b625220f52033a353160" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.505290 4865 scope.go:117] "RemoveContainer" containerID="d16a099e596d85c91fd0fa1d94c0861d76e76c4983032b1c5e97e173ecc3c6c4" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.505523 4865 scope.go:117] "RemoveContainer" containerID="c51c964c06647f878163c7193cda0d69f17f715564a8f339956514f2b970af5a" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.505787 4865 scope.go:117] "RemoveContainer" containerID="476d6bdbc43b8e01fb3e9f46b5fac5875299d36c4c9a12328874015faac89f4f" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.506176 4865 scope.go:117] "RemoveContainer" containerID="1b9b6be821f701ac56b53a484a353bea5212b6f02ef587724d911e861b2fc97c" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.506312 4865 scope.go:117] "RemoveContainer" containerID="0b2a7803942b15c05aaa94d320090897efd8100e0a4bcd07a1a0e623a23a3516" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.506382 4865 scope.go:117] "RemoveContainer" containerID="129bfde63977859660c6eb3aa9e50a03c29e7268576ca70bbc6f2ad00f8febc8" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.506722 4865 scope.go:117] "RemoveContainer" containerID="34c12403002230bf2149bbd73d264e2d87708fb1feba635b4fb8637cfcefe7d5" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.506873 4865 scope.go:117] "RemoveContainer" containerID="e3fe3b1865710694ccdd89df2ca4de17a4db373f4f67811172ced80874644711" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.506881 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.510151 4865 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.510224 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.511029 4865 scope.go:117] "RemoveContainer" containerID="ad61c2892a57901c8ddf3786866d73b9d89545eaba683bfd51798b77ae58c659" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.515372 4865 generic.go:334] "Generic (PLEG): container finished" podID="d2f4bfa4-63e2-418a-b52a-75d2992af596" containerID="efa85f7f325947f3c6e17fa6b4b0e0f0e4613a29c14fa6a93c768879ca7375db" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.515441 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" 
event={"ID":"d2f4bfa4-63e2-418a-b52a-75d2992af596","Type":"ContainerDied","Data":"efa85f7f325947f3c6e17fa6b4b0e0f0e4613a29c14fa6a93c768879ca7375db"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.515846 4865 scope.go:117] "RemoveContainer" containerID="efa85f7f325947f3c6e17fa6b4b0e0f0e4613a29c14fa6a93c768879ca7375db" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.518415 4865 generic.go:334] "Generic (PLEG): container finished" podID="0167f850-ba43-426a-8c56-aa171131e7da" containerID="c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.518457 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" event={"ID":"0167f850-ba43-426a-8c56-aa171131e7da","Type":"ContainerDied","Data":"c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.518756 4865 scope.go:117] "RemoveContainer" containerID="c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.530776 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" containerID="cri-o://7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3" gracePeriod=17 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.541662 4865 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1" exitCode=0 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.541826 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"fadf00caebf0f5e86b95a60436cabe3d728fed117d4a7ac422bf21c949a5ead1"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.545807 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" event={"ID":"1405b73d-070d-495e-a80d-46fc2505ff8c","Type":"ContainerStarted","Data":"9dcf35e0bb514847a42d1b846f85d4960f429c68eb25b5b14fd48cc6c324b127"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.546332 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.557308 4865 generic.go:334] "Generic (PLEG): container finished" podID="8ef0fdaa-8086-467d-8106-5c6dec532dba" containerID="a2cc111c5b050a0cea0b3665386dcd21df0b26072f8ef117916e8082c8b01f56" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.557417 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" event={"ID":"8ef0fdaa-8086-467d-8106-5c6dec532dba","Type":"ContainerDied","Data":"a2cc111c5b050a0cea0b3665386dcd21df0b26072f8ef117916e8082c8b01f56"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.558196 4865 scope.go:117] "RemoveContainer" containerID="a2cc111c5b050a0cea0b3665386dcd21df0b26072f8ef117916e8082c8b01f56" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.563983 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" 
containerName="galera" containerID="cri-o://50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d" gracePeriod=16 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.569053 4865 generic.go:334] "Generic (PLEG): container finished" podID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" containerID="05c32a9f69fa45c4c849c2c0593634a1d358994f1d3669db97162d3139e34baf" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.569130 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" event={"ID":"5fb13a32-67c3-46b1-a0b8-573e941e6c7e","Type":"ContainerDied","Data":"05c32a9f69fa45c4c849c2c0593634a1d358994f1d3669db97162d3139e34baf"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.569744 4865 scope.go:117] "RemoveContainer" containerID="05c32a9f69fa45c4c849c2c0593634a1d358994f1d3669db97162d3139e34baf" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.575734 4865 generic.go:334] "Generic (PLEG): container finished" podID="50ab40ef-54b8-4392-89ad-6b73c346c225" containerID="92d4c517ce6499ebcbda5be7b0086e4746751e676cc9d2a3ff865034f2adc980" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.575796 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" event={"ID":"50ab40ef-54b8-4392-89ad-6b73c346c225","Type":"ContainerDied","Data":"92d4c517ce6499ebcbda5be7b0086e4746751e676cc9d2a3ff865034f2adc980"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.576358 4865 scope.go:117] "RemoveContainer" containerID="92d4c517ce6499ebcbda5be7b0086e4746751e676cc9d2a3ff865034f2adc980" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.584300 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.584907 4865 generic.go:334] "Generic (PLEG): container finished" podID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerID="23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450" exitCode=0 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.584993 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" event={"ID":"2c1ba660-8691-49e2-b0cc-056355d82f4c","Type":"ContainerDied","Data":"23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.589311 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-7954f5f757-48b72_bdee5ba9-99e1-495c-9b52-f670cbbffea2/download-server/0.log" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.589391 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-48b72" event={"ID":"bdee5ba9-99e1-495c-9b52-f670cbbffea2","Type":"ContainerStarted","Data":"139c69e5075bf3db75147fdc4a4a177cb6ea789840fedac8f8ce4b2131eae797"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.589781 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-48b72" Jan 23 12:58:16 crc kubenswrapper[4865]: E0123 12:58:16.590115 4865 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-hz4vm" message=< Jan 
23 12:58:16 crc kubenswrapper[4865]: Exiting ovn-controller (1) [ OK ] Jan 23 12:58:16 crc kubenswrapper[4865]: > Jan 23 12:58:16 crc kubenswrapper[4865]: E0123 12:58:16.590140 4865 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" containerID="cri-o://fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.590175 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" containerID="cri-o://fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" gracePeriod=18 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.590420 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.590469 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.601293 4865 generic.go:334] "Generic (PLEG): container finished" podID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" containerID="970fbd29e7ad027b715d5162c082d4b785da78e0c7cbe380974c284c1f434308" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.601372 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" event={"ID":"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b","Type":"ContainerDied","Data":"970fbd29e7ad027b715d5162c082d4b785da78e0c7cbe380974c284c1f434308"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.602236 4865 scope.go:117] "RemoveContainer" containerID="970fbd29e7ad027b715d5162c082d4b785da78e0c7cbe380974c284c1f434308" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.614565 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-swk7h_3fbcdfcf-19cc-46b9-a986-bd9426751459/router/0.log" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.614640 4865 generic.go:334] "Generic (PLEG): container finished" podID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerID="811113bc6d4b48797e3257d6ecb031d47d0fdf3124b5f86b193d7c7a21255914" exitCode=137 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.614736 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-swk7h" event={"ID":"3fbcdfcf-19cc-46b9-a986-bd9426751459","Type":"ContainerDied","Data":"811113bc6d4b48797e3257d6ecb031d47d0fdf3124b5f86b193d7c7a21255914"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.618792 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": dial tcp 10.217.0.61:8081: connect: connection refused" Jan 23 
12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.621429 4865 generic.go:334] "Generic (PLEG): container finished" podID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" containerID="248ff4a5d09fe9269983ef79eb681bb0f4f096314c413f09a1f52a736d0e4913" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.621524 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" event={"ID":"840bd4e6-18da-498a-bd3a-d4e80c69ec70","Type":"ContainerDied","Data":"248ff4a5d09fe9269983ef79eb681bb0f4f096314c413f09a1f52a736d0e4913"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.622493 4865 scope.go:117] "RemoveContainer" containerID="248ff4a5d09fe9269983ef79eb681bb0f4f096314c413f09a1f52a736d0e4913" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.628587 4865 generic.go:334] "Generic (PLEG): container finished" podID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" containerID="34eabd6c502550b118ebbab06e0e826b6e3ea3a716d028a059c8e0fdcc47a0d5" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.628668 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" event={"ID":"6d4fbfc8-900e-4c44-a458-039d37a6dd40","Type":"ContainerDied","Data":"34eabd6c502550b118ebbab06e0e826b6e3ea3a716d028a059c8e0fdcc47a0d5"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.629349 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8081/readyz\": dial tcp 10.217.0.222:8081: connect: connection refused" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.629413 4865 scope.go:117] "RemoveContainer" containerID="34eabd6c502550b118ebbab06e0e826b6e3ea3a716d028a059c8e0fdcc47a0d5" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.636914 4865 generic.go:334] "Generic (PLEG): container finished" podID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" containerID="5286c16ec9fce398db9582fe2fc7bb61df7b87e42ac85b0d231655c2783a9fa6" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.637011 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" event={"ID":"4836de1a-4a0e-4d02-af0e-3408b4814ecf","Type":"ContainerDied","Data":"5286c16ec9fce398db9582fe2fc7bb61df7b87e42ac85b0d231655c2783a9fa6"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.638108 4865 scope.go:117] "RemoveContainer" containerID="5286c16ec9fce398db9582fe2fc7bb61df7b87e42ac85b0d231655c2783a9fa6" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.642418 4865 generic.go:334] "Generic (PLEG): container finished" podID="10627175-8e39-4799-bec7-c0b49b938a29" containerID="1fdbf97e657e0bfd89c2f730d3b5c9a07d8e976682f4a06188f4ac6b2e76428f" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.642463 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" event={"ID":"10627175-8e39-4799-bec7-c0b49b938a29","Type":"ContainerDied","Data":"1fdbf97e657e0bfd89c2f730d3b5c9a07d8e976682f4a06188f4ac6b2e76428f"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.643156 4865 scope.go:117] "RemoveContainer" containerID="1fdbf97e657e0bfd89c2f730d3b5c9a07d8e976682f4a06188f4ac6b2e76428f" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 
12:58:16.648026 4865 generic.go:334] "Generic (PLEG): container finished" podID="582f83b4-97dc-4f56-9879-c73fab80488a" containerID="adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064" exitCode=0 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.648112 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" event={"ID":"582f83b4-97dc-4f56-9879-c73fab80488a","Type":"ContainerDied","Data":"adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.653649 4865 generic.go:334] "Generic (PLEG): container finished" podID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" containerID="63205f38181e8e7e4b899f35881b81bd6c72eba848992c5ee8006e2f0700a70e" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.653728 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" event={"ID":"da1cf187-8918-46b4-ab33-e8912c9d0dd6","Type":"ContainerDied","Data":"63205f38181e8e7e4b899f35881b81bd6c72eba848992c5ee8006e2f0700a70e"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.654552 4865 scope.go:117] "RemoveContainer" containerID="63205f38181e8e7e4b899f35881b81bd6c72eba848992c5ee8006e2f0700a70e" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.661797 4865 generic.go:334] "Generic (PLEG): container finished" podID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" containerID="ce6ae7c2846a936cf92acff3471ab484efba821c99400e231c47bf24e176f43e" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.662108 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" event={"ID":"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73","Type":"ContainerDied","Data":"ce6ae7c2846a936cf92acff3471ab484efba821c99400e231c47bf24e176f43e"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.662916 4865 scope.go:117] "RemoveContainer" containerID="ce6ae7c2846a936cf92acff3471ab484efba821c99400e231c47bf24e176f43e" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.666878 4865 generic.go:334] "Generic (PLEG): container finished" podID="967c3782-1bce-4145-8244-7650fe19dc22" containerID="76f1ecd5b0730a0e64ce51eb0d79c203a16172b032a4b3c0ff734fdda3df422e" exitCode=1 Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.666932 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" event={"ID":"967c3782-1bce-4145-8244-7650fe19dc22","Type":"ContainerDied","Data":"76f1ecd5b0730a0e64ce51eb0d79c203a16172b032a4b3c0ff734fdda3df422e"} Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.667857 4865 scope.go:117] "RemoveContainer" containerID="76f1ecd5b0730a0e64ce51eb0d79c203a16172b032a4b3c0ff734fdda3df422e" Jan 23 12:58:16 crc kubenswrapper[4865]: I0123 12:58:16.826218 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.066166 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.066223 4865 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Jan 23 12:58:17 crc kubenswrapper[4865]: E0123 12:58:17.121181 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3685d2b2_151b_479a_92c1_ae400eacd1b9.slice/crio-conmon-32f90a1a6ab5a61d1d5a4a1eba24f87dbec2aef63107d25e9e01782e3a02c493.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7d3e9f0_4ed8_427a_bca3_b0403d23d8fb.slice/crio-conmon-a61639e591677ff6558a3b162b8b0f56384d87f66c1768087f94fff3e4308e0f.scope\": RecentStats: unable to find data in memory cache]" Jan 23 12:58:17 crc kubenswrapper[4865]: E0123 12:58:17.133353 4865 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/glance-glance-default-internal-api-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/glance-glance-default-internal-api-0\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openstack/glance-default-internal-api-0" volumeName="glance" Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.294408 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:17 crc kubenswrapper[4865]: E0123 12:58:17.304484 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:17 crc kubenswrapper[4865]: E0123 12:58:17.304966 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:17 crc kubenswrapper[4865]: E0123 12:58:17.305314 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:17 crc kubenswrapper[4865]: E0123 12:58:17.305387 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" probeType="Readiness" 
pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.706980 4865 generic.go:334] "Generic (PLEG): container finished" podID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" containerID="a61639e591677ff6558a3b162b8b0f56384d87f66c1768087f94fff3e4308e0f" exitCode=137 Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.707298 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerDied","Data":"a61639e591677ff6558a3b162b8b0f56384d87f66c1768087f94fff3e4308e0f"} Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.709269 4865 generic.go:334] "Generic (PLEG): container finished" podID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerID="220ca834be13b31e6269099d1d8bc1f8f8a8374c70fc5e8ee1abc9fd26326377" exitCode=1 Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.709332 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" event={"ID":"9177b0d0-3ce7-40fe-8567-85cb8dd5227a","Type":"ContainerDied","Data":"220ca834be13b31e6269099d1d8bc1f8f8a8374c70fc5e8ee1abc9fd26326377"} Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.709915 4865 scope.go:117] "RemoveContainer" containerID="220ca834be13b31e6269099d1d8bc1f8f8a8374c70fc5e8ee1abc9fd26326377" Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.722966 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.723317 4865 generic.go:334] "Generic (PLEG): container finished" podID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerID="9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945" exitCode=0 Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.723420 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" event={"ID":"189c80ac-7038-4b48-bebb-5c5d7e2cd362","Type":"ContainerDied","Data":"9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945"} Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.729259 4865 generic.go:334] "Generic (PLEG): container finished" podID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" exitCode=0 Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.729509 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hz4vm" event={"ID":"d8331842-a45a-4cbf-a55b-0d8dde7f69eb","Type":"ContainerDied","Data":"fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8"} Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.752218 4865 generic.go:334] "Generic (PLEG): container finished" podID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerID="32f90a1a6ab5a61d1d5a4a1eba24f87dbec2aef63107d25e9e01782e3a02c493" exitCode=1 Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.752332 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8bjkz" event={"ID":"3685d2b2-151b-479a-92c1-ae400eacd1b9","Type":"ContainerDied","Data":"32f90a1a6ab5a61d1d5a4a1eba24f87dbec2aef63107d25e9e01782e3a02c493"} Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.752952 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.753007 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 12:58:17 crc kubenswrapper[4865]: I0123 12:58:17.753651 4865 scope.go:117] "RemoveContainer" containerID="32f90a1a6ab5a61d1d5a4a1eba24f87dbec2aef63107d25e9e01782e3a02c493" Jan 23 12:58:18 crc kubenswrapper[4865]: I0123 12:58:18.467882 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" containerID="cri-o://02538d47a4f7198d06ac45cdff31ecba4f646e402e14499af85a0e57e26dbec9" gracePeriod=13 Jan 23 12:58:18 crc kubenswrapper[4865]: I0123 12:58:18.552728 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:18 crc kubenswrapper[4865]: I0123 12:58:18.553057 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:18 crc kubenswrapper[4865]: I0123 12:58:18.686225 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 23 12:58:18 crc kubenswrapper[4865]: I0123 12:58:18.686270 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 23 12:58:18 crc kubenswrapper[4865]: I0123 12:58:18.768283 4865 generic.go:334] "Generic (PLEG): container finished" podID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" containerID="7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c" exitCode=1 Jan 23 12:58:18 crc kubenswrapper[4865]: I0123 12:58:18.768340 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerDied","Data":"7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c"} Jan 23 12:58:18 crc kubenswrapper[4865]: I0123 12:58:18.771748 4865 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="5a66ef84b06b7efe6172f489b5b4f6cad35b034791d3c7a4bbff3f436519ae0e" exitCode=0 Jan 23 12:58:18 crc kubenswrapper[4865]: I0123 12:58:18.771777 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"5a66ef84b06b7efe6172f489b5b4f6cad35b034791d3c7a4bbff3f436519ae0e"} Jan 23 12:58:18 crc kubenswrapper[4865]: I0123 12:58:18.775650 4865 scope.go:117] "RemoveContainer" containerID="d3d1a0d7a2dfbb419198472561c8b84f95b853d7374fc21ee4c10bfa5a6a34a1" Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.145640 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.145695 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.163428 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.355799 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:19 crc kubenswrapper[4865]: E0123 12:58:19.754068 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:19 crc kubenswrapper[4865]: E0123 12:58:19.759217 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:19 crc kubenswrapper[4865]: E0123 12:58:19.763539 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:19 crc kubenswrapper[4865]: E0123 12:58:19.763617 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.781566 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-5d7d54b946-29gbz_9e2332f2-6e3b-4355-9af1-24a8980c7d8a/console/0.log" Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.782085 4865 generic.go:334] "Generic (PLEG): container finished" podID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerID="02538d47a4f7198d06ac45cdff31ecba4f646e402e14499af85a0e57e26dbec9" exitCode=2 Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.782210 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d7d54b946-29gbz" event={"ID":"9e2332f2-6e3b-4355-9af1-24a8980c7d8a","Type":"ContainerDied","Data":"02538d47a4f7198d06ac45cdff31ecba4f646e402e14499af85a0e57e26dbec9"} Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.783851 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" event={"ID":"2699af1d-57a0-4ce2-9550-b423f9eafc0f","Type":"ContainerStarted","Data":"432a1a4071bbfc164b3e505f3e4dcae88a37d22aa2a060b89d9d61d60cbf9348"} Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.784049 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.784639 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.784773 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.785768 4865 generic.go:334] "Generic (PLEG): container finished" podID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerID="5742c7fb47488dc050a829f7c69eb88fe730402225438befdb4f7b95a364495a" exitCode=0 Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.785897 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerDied","Data":"5742c7fb47488dc050a829f7c69eb88fe730402225438befdb4f7b95a364495a"} Jan 23 12:58:19 crc kubenswrapper[4865]: I0123 12:58:19.787242 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 23 12:58:20 crc kubenswrapper[4865]: E0123 12:58:20.232562 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec is running failed: container process not found" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:20 crc kubenswrapper[4865]: E0123 12:58:20.233129 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec is running 
failed: container process not found" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:20 crc kubenswrapper[4865]: E0123 12:58:20.233492 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec is running failed: container process not found" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:20 crc kubenswrapper[4865]: E0123 12:58:20.233517 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.275057 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:20 crc kubenswrapper[4865]: E0123 12:58:20.457440 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff is running failed: container process not found" containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:20 crc kubenswrapper[4865]: E0123 12:58:20.458587 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff is running failed: container process not found" containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:20 crc kubenswrapper[4865]: E0123 12:58:20.459235 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff is running failed: container process not found" containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:20 crc kubenswrapper[4865]: E0123 12:58:20.459299 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.489037 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 
12:58:20.489114 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.642169 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.642781 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.643144 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.643635 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.644033 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.644343 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.644792 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.645155 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.645453 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.646072 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.646451 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.647006 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.649778 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.650257 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.650786 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.651473 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 
12:58:20.651799 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.652054 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.652269 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.652505 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.652790 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.653425 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.653755 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.654016 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.654241 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" 
pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.654487 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.654756 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.654971 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.655190 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.655412 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.655669 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.655901 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.656287 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 
crc kubenswrapper[4865]: I0123 12:58:20.656519 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.658228 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.658700 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.659049 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.659678 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.660872 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.661282 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.661712 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.662205 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" 
pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.662797 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.662792 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.663352 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.663943 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.664209 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.664417 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.664691 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.664934 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.665192 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.665448 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.665726 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.665932 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.666123 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.666339 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.666549 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.666783 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:20 crc kubenswrapper[4865]: I0123 12:58:20.666977 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 
12:58:20.675450 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.675893 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.676296 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.676675 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.677116 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.677617 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.677981 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.678329 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.678720 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.679048 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.679405 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.679888 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.680275 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.680665 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.681032 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.681373 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.681827 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: 
I0123 12:58:20.682172 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.682551 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.682910 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.683266 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.683633 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.684003 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.684375 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.684726 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.685083 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" 
pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.685532 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.685907 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.686243 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.686591 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.686989 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.687320 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.687800 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.688161 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 
12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.688639 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.689370 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.689852 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.690250 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.690728 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.691085 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.691483 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.692215 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.692578 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.692977 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.702440 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.722923 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.742493 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.762249 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: E0123 12:58:20.782658 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.783849 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: E0123 12:58:20.784813 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:21 crc kubenswrapper[4865]: E0123 12:58:20.786879 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:21 crc kubenswrapper[4865]: E0123 12:58:20.786954 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.802949 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.823402 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.842014 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.862386 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.882255 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.902344 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.922755 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.942345 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.962200 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:20.983266 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.003308 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.022224 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: E0123 12:58:21.163310 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event=< Jan 23 12:58:21 crc kubenswrapper[4865]: &Event{ObjectMeta:{catalog-operator-68c6474976-42cdm.188d5d87792a6b03 openshift-operator-lifecycle-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-operator-lifecycle-manager,Name:catalog-operator-68c6474976-42cdm,UID:843c383b-053f-42f5-88ce-7a216f5354a3,APIVersion:v1,ResourceVersion:27211,FieldPath:spec.containers{catalog-operator},},Reason:ProbeError,Message:Liveness probe error: Get "https://10.217.0.26:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 23 12:58:21 crc kubenswrapper[4865]: body: Jan 23 12:58:21 crc kubenswrapper[4865]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 12:57:45.061264131 +0000 UTC m=+3909.230336357,LastTimestamp:2026-01-23 12:57:45.061264131 +0000 UTC 
m=+3909.230336357,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 12:58:21 crc kubenswrapper[4865]: > Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.169347 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.170061 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.175408 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.175707 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.175921 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.176138 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.176623 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.186781 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.204200 4865 
status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.224041 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.242637 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.262891 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.282328 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.302804 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.322123 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.342713 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.362854 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" 
pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.382165 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.402803 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.423016 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.442378 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.462775 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.482423 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.503116 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.522865 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 
12:58:21.542888 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.551396 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.551442 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.562831 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.582097 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.603233 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.622040 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.642713 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.662738 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.682439 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.699004 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.702994 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.722052 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.736720 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.736769 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.744586 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.762771 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.782041 4865 status_manager.go:851] "Failed to get status 
for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.798686 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.798794 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.802164 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.822519 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.843202 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.863313 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.882992 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.902926 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 
38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.922690 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.942481 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.958172 4865 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body= Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.958233 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.962339 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:21 crc kubenswrapper[4865]: I0123 12:58:21.982531 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:22 crc kubenswrapper[4865]: I0123 12:58:22.002720 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:22 crc kubenswrapper[4865]: I0123 12:58:22.008975 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:58:22 crc kubenswrapper[4865]: I0123 12:58:22.009028 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:58:22 crc kubenswrapper[4865]: I0123 12:58:22.022648 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:22 crc kubenswrapper[4865]: I0123 12:58:22.043167 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:22 crc kubenswrapper[4865]: I0123 12:58:22.062567 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:22 crc kubenswrapper[4865]: I0123 12:58:22.082513 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:22 crc kubenswrapper[4865]: I0123 12:58:22.102391 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:22 crc kubenswrapper[4865]: I0123 12:58:22.122323 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:22 crc kubenswrapper[4865]: I0123 12:58:22.141313 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:22.142652 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:22.163038 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:22.182044 4865 status_manager.go:851] 
"Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:22.202095 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:22.223574 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:22.242449 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:22.304778 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:22.305206 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:22.305480 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:22.305513 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:22.486047 4865 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:22.654226 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:22.711427 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1 is running failed: container process not found" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:22.711903 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1 is running failed: container process not found" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:22.712225 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1 is running failed: container process not found" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:22.712287 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:22.862863 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792 is running failed: container process not found" containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:22.863476 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792 is running failed: container process not found" containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:22.863788 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792 is running failed: container process not found" containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 
12:58:22.863823 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792 is running failed: container process not found" probeType="Readiness" pod="openstack-operators/openstack-operator-index-hzwqc" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerName="registry-server" Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:23.131118 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:23.131482 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:23.132149 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:23 crc kubenswrapper[4865]: E0123 12:58:23.132207 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.280051 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.280154 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.281235 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"69363188c023ec037365e6462967a0eb9169a136bc3d2131e45cd5a55c949188"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.281303 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" containerID="cri-o://69363188c023ec037365e6462967a0eb9169a136bc3d2131e45cd5a55c949188" gracePeriod=30 Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.338482 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.390878 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.390940 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.390939 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.391014 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.462857 4865 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.462917 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.773410 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.773483 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.776748 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.792526 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" 
containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": dial tcp 10.217.0.47:7572: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.838888 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.838961 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.883792 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g5xkl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 23 12:58:23 crc kubenswrapper[4865]: I0123 12:58:23.883855 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.127943 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.128648 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.129408 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.130130 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.130496 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": 
dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.130959 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.131186 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.131420 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.131614 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.131857 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.132155 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.132424 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.132614 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.132766 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.132919 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.133128 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.133386 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.133675 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.133879 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.134166 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.134406 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.134673 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial 
tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.134993 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.135271 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.135521 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.135776 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.136014 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.136261 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.136499 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.136718 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.136967 4865 status_manager.go:851] "Failed to 
get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.137186 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.137499 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.137786 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.138000 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.138225 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.138476 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.138806 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.139181 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.139421 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.139682 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.140001 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.140229 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.140470 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.140892 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.141117 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.141361 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection 
refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.141783 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.142063 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.142342 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.142764 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.143030 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.143310 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.143656 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.144779 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.144819 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get 
\"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.144190 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.148209 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.148543 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.148879 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.149225 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.149670 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.149881 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.150262 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.166870 4865 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-4g249 
container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.166933 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.355573 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.512138 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.512199 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.550691 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.550742 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.804479 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.883404 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.883464 4865 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.883510 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:24 crc kubenswrapper[4865]: I0123 12:58:24.883561 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:25 crc kubenswrapper[4865]: E0123 12:58:25.388101 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:25 crc kubenswrapper[4865]: E0123 12:58:25.388574 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:25 crc kubenswrapper[4865]: E0123 12:58:25.388937 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:25 crc kubenswrapper[4865]: E0123 12:58:25.389939 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:25 crc kubenswrapper[4865]: E0123 12:58:25.390300 4865 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:25 crc kubenswrapper[4865]: I0123 12:58:25.390470 4865 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 12:58:25 crc kubenswrapper[4865]: E0123 12:58:25.391261 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Jan 23 12:58:25 crc kubenswrapper[4865]: E0123 12:58:25.592035 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Jan 23 12:58:25 crc kubenswrapper[4865]: 
I0123 12:58:25.690477 4865 scope.go:117] "RemoveContainer" containerID="16c3ee308f4ea038e0db292673884d80dda1fbd5964d94cb29ddd5c2ddaa1043" Jan 23 12:58:25 crc kubenswrapper[4865]: I0123 12:58:25.847430 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" event={"ID":"1959a742-ade2-4266-9a93-e96a1b6e3908","Type":"ContainerStarted","Data":"e7440e71b764fc1170b4e582df1fa0de60d00e2cc4d7348e19eb5ccc39b95a74"} Jan 23 12:58:25 crc kubenswrapper[4865]: E0123 12:58:25.992612 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.123521 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.123912 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.124141 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.124363 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.124566 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.124953 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.125253 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.126790 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.127019 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.127583 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.129055 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.131783 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.132306 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.132650 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.132937 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 
12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.133234 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.133530 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.133851 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.134256 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.134535 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.134831 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.135154 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.135457 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.135751 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.136160 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.136577 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.136851 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.137114 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.137362 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.137586 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.137850 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.138098 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 
38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.138342 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.138547 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.138803 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.139153 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.139414 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.139698 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.139971 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.140446 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.140818 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" 
pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.141519 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.141980 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.142261 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.142546 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.142862 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.143152 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.143425 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.143706 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.143968 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.144228 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.144513 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.144777 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.145036 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.145353 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.146496 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.146792 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.147069 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.147403 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.147687 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.147950 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.217162 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.224893 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.259374 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.339400 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.348481 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.382934 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.383448 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.618884 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.629821 4865 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8081/readyz\": dial tcp 10.217.0.222:8081: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.700293 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.743983 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.779217 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:58:26 crc kubenswrapper[4865]: E0123 12:58:26.793176 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.845003 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.859442 4865 generic.go:334] "Generic (PLEG): container finished" podID="15434cef-8cb6-4386-b761-143f1819cac8" containerID="2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4" exitCode=1 Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.859489 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" event={"ID":"15434cef-8cb6-4386-b761-143f1819cac8","Type":"ContainerDied","Data":"2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4"} Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.860483 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.861117 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.861430 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.861806 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.862097 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.862422 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.862687 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.862973 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.863276 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.863584 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.863921 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.864276 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.864656 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.865022 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.865301 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.865735 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.865901 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.866069 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.866328 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.866631 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.866946 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.867233 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.867528 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.867857 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.868128 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.868554 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.868969 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.869278 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.869678 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.870179 4865 status_manager.go:851] 
"Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.870647 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.870880 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.871157 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.871470 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.871787 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.872065 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.872348 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.872706 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.873042 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.873286 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.873570 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.873883 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.874312 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.874566 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.876026 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.876403 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 
12:58:26.876689 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.877013 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.877319 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.877570 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.877881 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.878167 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.878443 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.878764 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.879091 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" 
pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.879333 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.879666 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.880033 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.880312 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.880569 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.880879 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.881202 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.881482 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: 
connect: connection refused" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.920980 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:58:26 crc kubenswrapper[4865]: I0123 12:58:26.940916 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:58:27 crc kubenswrapper[4865]: I0123 12:58:27.057084 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Jan 23 12:58:27 crc kubenswrapper[4865]: I0123 12:58:27.057230 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Jan 23 12:58:27 crc kubenswrapper[4865]: I0123 12:58:27.118519 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:58:27 crc kubenswrapper[4865]: E0123 12:58:27.119172 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:58:27 crc kubenswrapper[4865]: I0123 12:58:27.191143 4865 scope.go:117] "RemoveContainer" containerID="2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4" Jan 23 12:58:27 crc kubenswrapper[4865]: I0123 12:58:27.255005 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:58:27 crc kubenswrapper[4865]: I0123 12:58:27.296071 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:58:27 crc kubenswrapper[4865]: I0123 12:58:27.299109 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:58:27 crc kubenswrapper[4865]: E0123 12:58:27.304276 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:27 crc kubenswrapper[4865]: E0123 12:58:27.304778 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" 
containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:27 crc kubenswrapper[4865]: E0123 12:58:27.305488 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:27 crc kubenswrapper[4865]: E0123 12:58:27.305574 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" Jan 23 12:58:27 crc kubenswrapper[4865]: E0123 12:58:27.449073 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event=< Jan 23 12:58:27 crc kubenswrapper[4865]: &Event{ObjectMeta:{catalog-operator-68c6474976-42cdm.188d5d87792a6b03 openshift-operator-lifecycle-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-operator-lifecycle-manager,Name:catalog-operator-68c6474976-42cdm,UID:843c383b-053f-42f5-88ce-7a216f5354a3,APIVersion:v1,ResourceVersion:27211,FieldPath:spec.containers{catalog-operator},},Reason:ProbeError,Message:Liveness probe error: Get "https://10.217.0.26:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 23 12:58:27 crc kubenswrapper[4865]: body: Jan 23 12:58:27 crc kubenswrapper[4865]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 12:57:45.061264131 +0000 UTC m=+3909.230336357,LastTimestamp:2026-01-23 12:57:45.061264131 +0000 UTC m=+3909.230336357,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 12:58:27 crc kubenswrapper[4865]: > Jan 23 12:58:27 crc kubenswrapper[4865]: I0123 12:58:27.550356 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:27 crc kubenswrapper[4865]: I0123 12:58:27.550403 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:27 crc kubenswrapper[4865]: I0123 12:58:27.580582 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" Jan 23 12:58:28 crc kubenswrapper[4865]: E0123 12:58:28.406538 4865 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s" Jan 23 12:58:28 crc kubenswrapper[4865]: I0123 12:58:28.686375 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 23 12:58:28 crc kubenswrapper[4865]: I0123 12:58:28.686436 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.144912 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.144976 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.145074 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.145790 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.146197 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.146739 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.146948 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: 
connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.147221 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.147466 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.147953 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.148204 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.148418 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.148648 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.148947 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.149227 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.149438 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.149773 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.150088 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.150439 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.150768 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.151007 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.151287 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.151569 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.151800 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.152051 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.152243 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.152452 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.152681 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.152906 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.153156 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.153509 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.153760 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.154014 4865 status_manager.go:851] "Failed to get status for 
pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.154233 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.154464 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.154758 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.155038 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.155336 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.155652 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.156468 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.156792 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.157106 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.157342 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.157590 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.157918 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.158209 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.158452 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.158817 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.159075 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.159316 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.159515 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.159848 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.160071 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.160299 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.160542 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.160844 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.161232 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.161676 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.161944 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.162154 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.162474 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.162813 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.163451 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.163731 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.164216 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: E0123 12:58:29.756432 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:29 crc kubenswrapper[4865]: E0123 
12:58:29.757745 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:29 crc kubenswrapper[4865]: E0123 12:58:29.759206 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:29 crc kubenswrapper[4865]: E0123 12:58:29.759233 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.885651 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" event={"ID":"dbfec6f5-80b4-480f-a958-c3107b2776c0","Type":"ContainerStarted","Data":"d9147b8bace7855a843e97a1bac103beaa6d491e6eb97174767cc7a9b715c786"} Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.888003 4865 generic.go:334] "Generic (PLEG): container finished" podID="1959a742-ade2-4266-9a93-e96a1b6e3908" containerID="e7440e71b764fc1170b4e582df1fa0de60d00e2cc4d7348e19eb5ccc39b95a74" exitCode=1 Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.888057 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" event={"ID":"1959a742-ade2-4266-9a93-e96a1b6e3908","Type":"ContainerDied","Data":"e7440e71b764fc1170b4e582df1fa0de60d00e2cc4d7348e19eb5ccc39b95a74"} Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.888898 4865 scope.go:117] "RemoveContainer" containerID="e7440e71b764fc1170b4e582df1fa0de60d00e2cc4d7348e19eb5ccc39b95a74" Jan 23 12:58:29 crc kubenswrapper[4865]: E0123 12:58:29.889168 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.889371 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.889770 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: 
connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.890066 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.890481 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.890992 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.891423 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.891643 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.891851 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.892007 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.892204 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.892412 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" 
pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.892578 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.892787 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.892848 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-8lsbn_cfe7c397-99ae-494d-a418-b0f08568f156/console-operator/0.log" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.892887 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" event={"ID":"cfe7c397-99ae-494d-a418-b0f08568f156","Type":"ContainerStarted","Data":"7fa635d8424d3f66c94287f9ba0ad214fc331b0c607cab47353c96a11d4e376e"} Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.892952 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.893168 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.893419 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.893585 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.893758 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" 
pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.893908 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.894057 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.894210 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.894383 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.894532 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.894697 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.894910 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.895258 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.895923 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.896238 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.896674 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.897077 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.897327 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.897622 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.897957 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.898182 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.898461 4865 status_manager.go:851] 
"Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.898900 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.899206 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.899462 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.899802 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.900174 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.900476 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.900797 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.901109 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" 
pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.901424 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.901715 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.902090 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.902387 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.902662 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.903019 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.903307 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.903532 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection 
refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.903890 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.904104 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.904315 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.904571 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.904998 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.905214 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.905455 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.905847 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.906106 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.906348 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:29 crc kubenswrapper[4865]: I0123 12:58:29.906551 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.232805 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec is running failed: container process not found" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.233333 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec is running failed: container process not found" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.233632 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec is running failed: container process not found" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.233682 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.457306 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff is running failed: container process not found" containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.458381 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff is running failed: container process not found" containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.458792 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff is running failed: container process not found" containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.458836 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.488637 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.488691 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.551403 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.551488 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.779529 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.780681 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.781958 4865 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.781993 4865 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.903484 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" event={"ID":"8e227974-40b8-4d16-8d5f-961b705a9740","Type":"ContainerStarted","Data":"f885d8e004cc28a105f22692ba41d19be021fcaf768af9b3403a43a9e72e86cd"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.904337 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.904582 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.904955 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.905375 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.905653 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.905966 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 
38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.906020 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.906221 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.906477 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.906742 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.906989 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.907296 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.907659 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.907832 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"029e74d157610779c3432d4dc05fbafe2978fe349566339d42ce705bb684d582"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.907894 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 
38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.908154 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.908389 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.908625 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.908818 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.908994 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.909244 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.909629 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.910090 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.910252 4865 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" event={"ID":"840bd4e6-18da-498a-bd3a-d4e80c69ec70","Type":"ContainerStarted","Data":"0568b418c6f9feb5d4a3677e2dd6a1275b515c491d2a226d891f6dcd0c60ddd9"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.910421 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.910669 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.911033 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.911540 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.911789 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.912276 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.912556 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.912734 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" event={"ID":"50ab40ef-54b8-4392-89ad-6b73c346c225","Type":"ContainerStarted","Data":"f1df69b30890e3a8738d77d070db7246ddb3431f16931509cd8d4ea106e2c215"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.912873 4865 status_manager.go:851] "Failed to get status for 
pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.913211 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.913546 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.913750 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.913959 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.914170 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.914422 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.914867 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.915206 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" 
pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.915369 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" event={"ID":"e92ddc14-bdb6-4407-b8a3-047079030166","Type":"ContainerStarted","Data":"232be94353aac2e87626af7b68144c0253405f7ad62d2f7221de27a4f2375137"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.915477 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.915723 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.915967 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.916175 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.916403 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.916656 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.916890 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.917114 4865 status_manager.go:851] "Failed 
to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.917318 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.917335 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" event={"ID":"8ef0fdaa-8086-467d-8106-5c6dec532dba","Type":"ContainerStarted","Data":"7b28b314a62253bab2dff6dd6dcdbd4bdcfa958e016c67c5de24c34342098c1a"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.917537 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.917741 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.917960 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.918232 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.918527 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.919010 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc 
kubenswrapper[4865]: I0123 12:58:30.919173 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" event={"ID":"5fb13a32-67c3-46b1-a0b8-573e941e6c7e","Type":"ContainerStarted","Data":"6c85179785689b31c63f8780ade170488f7194ec897911ab511ac9d07ded86b1"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.919279 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.919583 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.920117 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.920406 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.920920 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.921244 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.921502 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.922021 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" 
event={"ID":"843c383b-053f-42f5-88ce-7a216f5354a3","Type":"ContainerStarted","Data":"eab4cbda6d14d24be83659972215f2f801fab258cb4ad7de3a085c70e05d8d00"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.922105 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.922312 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.922503 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.930111 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" event={"ID":"4836de1a-4a0e-4d02-af0e-3408b4814ecf","Type":"ContainerStarted","Data":"c4b97db835d3d6e986a465f08412ca00cced0b0837e1f438d56d01cbd74d5217"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.933492 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" event={"ID":"661fbfd2-7d52-419a-943f-c57854d2306b","Type":"ContainerStarted","Data":"589ec817fa33d2945a77636e47bece82f5412244ceb74275ac90da7c251be8f4"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.939042 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" event={"ID":"60877fc9-78f8-4298-8104-8cd90e28d3bd","Type":"ContainerStarted","Data":"531e111fafa26b694fc58ed92230d273fd823d8287a4b5fa1ee16877358fe461"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.941398 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" event={"ID":"93194445-a021-4960-ab82-085f13cc959d","Type":"ContainerStarted","Data":"f632a0ec27e84458e4e6a53018ba24d615fb3557c7f27191234cdc3926b8f3a4"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.944297 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-szb9h" event={"ID":"3dee20a9-c14d-4a42-afb1-87d126996c56","Type":"ContainerStarted","Data":"07df3a1af2b9bce0fd1cfcba17c0038c1597e26cafddd2b98e53aadb8fdae6e7"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.947128 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" event={"ID":"4116044f-0cc3-41fb-9f26-536213e1dfa3","Type":"ContainerStarted","Data":"c6bcaf8ad19683b140c9c0ef03792fd1daa4e728dc17ee7c8d7fadfa8d25607c"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.949204 4865 generic.go:334] "Generic (PLEG): 
container finished" podID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" containerID="318fd508c84d3d6ab78b7cb8780af4462f79c7afaee925e3734567fc1e2961dd" exitCode=1 Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.949292 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"6083e716-8bbf-40bf-abdd-87e865a2f7ae","Type":"ContainerDied","Data":"318fd508c84d3d6ab78b7cb8780af4462f79c7afaee925e3734567fc1e2961dd"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.950252 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.950836 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.951786 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.952129 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.953372 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" event={"ID":"582f83b4-97dc-4f56-9879-c73fab80488a","Type":"ContainerStarted","Data":"8b6c74d2e551ed18d0e83b546e1c83a5fe9bf0ff237daee33f85aa343ab45a7d"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.956304 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" event={"ID":"967c3782-1bce-4145-8244-7650fe19dc22","Type":"ContainerStarted","Data":"954df5dd39f6d5ed839258a04ecc82954ab9e41d05f0cb9bba184f8fd069c651"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.962781 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerStarted","Data":"1f97d13aa3ce86a1d2a02f51ffbd89b438cecc4f57a86a864f771252de8c9b3f"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.964935 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" event={"ID":"d2f4bfa4-63e2-418a-b52a-75d2992af596","Type":"ContainerStarted","Data":"ffee0a65d3a9d4aaf1aaaa4f2d0daee9888f2045360ad40a337ab9bdd0bd24ba"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 
12:58:30.965765 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.969432 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"38402f098fb0fdf7d253c902044f47f5ebfa219c671d799bbbbee8bc79a16874"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.972155 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/0.log" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.972636 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" event={"ID":"141f6171-3d39-421b-98f4-6accc5d30ae2","Type":"ContainerStarted","Data":"d3495ed84a53f12e4007a0f99ec4d52ea21f2cbe622e4f903c019346c6618125"} Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.972711 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.972888 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.973136 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.973179 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.973432 4865 scope.go:117] "RemoveContainer" containerID="e7440e71b764fc1170b4e582df1fa0de60d00e2cc4d7348e19eb5ccc39b95a74" Jan 23 12:58:30 crc kubenswrapper[4865]: E0123 12:58:30.973695 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:58:30 crc kubenswrapper[4865]: I0123 12:58:30.986320 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 
38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.005877 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.030926 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.046554 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.066681 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.086527 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.105914 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.127746 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.147010 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: E0123 12:58:31.166821 4865 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/persistence-rabbitmq-server-0: failed to fetch PVC from API server: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/persistence-rabbitmq-server-0\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openstack/rabbitmq-server-0" volumeName="persistence" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.185818 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.206164 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.228455 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.246141 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.265881 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.286780 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.306178 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.329281 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.346203 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.369134 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.384341 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.385841 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.415165 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.426196 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.446765 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.467451 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.485974 4865 
status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.506841 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.526951 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.546510 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.566242 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.586643 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.606839 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: E0123 12:58:31.608071 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="6.4s" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.626487 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.645875 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.665637 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.685985 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.706328 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.726067 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.737202 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.737250 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.737406 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.737454 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" 
output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 23 12:58:31 crc kubenswrapper[4865]: I0123 12:58:31.745718 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.765616 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.789281 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.806404 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.831491 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.846742 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.865977 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.885895 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 
12:58:31.905980 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.926192 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.946023 4865 status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.957801 4865 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" start-of-body= Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.957843 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": dial tcp 192.168.126.11:10259: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.966659 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.985768 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.986169 4865 generic.go:334] "Generic (PLEG): container finished" podID="dbfec6f5-80b4-480f-a958-c3107b2776c0" containerID="d9147b8bace7855a843e97a1bac103beaa6d491e6eb97174767cc7a9b715c786" exitCode=1 Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.986217 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" event={"ID":"dbfec6f5-80b4-480f-a958-c3107b2776c0","Type":"ContainerDied","Data":"d9147b8bace7855a843e97a1bac103beaa6d491e6eb97174767cc7a9b715c786"} Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.986627 4865 scope.go:117] 
"RemoveContainer" containerID="d9147b8bace7855a843e97a1bac103beaa6d491e6eb97174767cc7a9b715c786" Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:31.986983 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-85cd9769bb-kkkcn_openstack-operators(dbfec6f5-80b4-480f-a958-c3107b2776c0)\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.988408 4865 generic.go:334] "Generic (PLEG): container finished" podID="c728912d-821c-4759-b175-3fd4324ad4f2" containerID="69363188c023ec037365e6462967a0eb9169a136bc3d2131e45cd5a55c949188" exitCode=0 Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.989335 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.989375 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.989657 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c728912d-821c-4759-b175-3fd4324ad4f2","Type":"ContainerDied","Data":"69363188c023ec037365e6462967a0eb9169a136bc3d2131e45cd5a55c949188"} Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.990350 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.990391 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.990463 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.990478 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.990742 4865 patch_prober.go:28] 
interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.990774 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.990774 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.990855 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7572/metrics\": dial tcp 10.217.0.47:7572: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.991260 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g5xkl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.991284 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.991322 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.992412 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:31.992438 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.006751 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.026489 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" 
pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.045854 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.066455 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.086852 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.106450 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.125895 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.146576 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.166478 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.187821 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.206260 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.226332 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.246629 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.266545 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.286738 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.303997 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.304354 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.304622 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.304653 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running 
failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.306006 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.326311 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.346344 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.365786 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.387185 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.409349 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.425812 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.446548 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 
23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.466109 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.485805 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.486918 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.506212 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.527053 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.546040 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.566885 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.586802 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.605729 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.626111 4865 status_manager.go:851] "Failed to 
get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.646047 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.666325 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.685847 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.706286 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.711512 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1 is running failed: container process not found" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.712127 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1 is running failed: container process not found" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.712568 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1 is running failed: container process not found" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.712674 4865 prober.go:104] "Probe errored" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of 578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.726669 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.746339 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.766189 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.785931 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.806538 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.826260 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.846151 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.862670 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" 
containerID="cri-o://6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82" gracePeriod=13 Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.863089 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792 is running failed: container process not found" containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.864101 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792 is running failed: container process not found" containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.864559 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792 is running failed: container process not found" containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.864616 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792 is running failed: container process not found" probeType="Readiness" pod="openstack-operators/openstack-operator-index-hzwqc" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerName="registry-server" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.866193 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.885927 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.905843 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.926990 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.946142 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.966256 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.985872 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:32 crc kubenswrapper[4865]: I0123 12:58:32.996719 4865 scope.go:117] "RemoveContainer" containerID="d9147b8bace7855a843e97a1bac103beaa6d491e6eb97174767cc7a9b715c786" Jan 23 12:58:32 crc kubenswrapper[4865]: E0123 12:58:32.996990 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-85cd9769bb-kkkcn_openstack-operators(dbfec6f5-80b4-480f-a958-c3107b2776c0)\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.006546 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.026534 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.046477 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.066539 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.086861 4865 status_manager.go:851] "Failed to get status for pod" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6497cbfbf6-fkmfr\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.106799 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.125839 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: E0123 12:58:33.130550 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:33 crc kubenswrapper[4865]: E0123 12:58:33.130854 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:33 crc kubenswrapper[4865]: E0123 12:58:33.131084 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:33 crc kubenswrapper[4865]: E0123 12:58:33.131142 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.145932 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" 
pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.166320 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.185801 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.206436 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.225782 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.248930 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.266452 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.286271 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.306650 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.325888 
4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.346544 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.366234 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.385858 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.390396 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.390455 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.390583 4865 patch_prober.go:28] interesting pod/downloads-7954f5f757-48b72 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.390643 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-48b72" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.406377 4865 status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 
crc kubenswrapper[4865]: I0123 12:58:33.426164 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.445827 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.461857 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.466070 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.486760 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.506113 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.526154 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.546342 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.550531 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.550778 4865 
patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.550816 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.550938 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.551015 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.551102 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.551055 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.566734 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.585798 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.606080 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.625845 4865 
status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.646590 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.666826 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.686572 4865 status_manager.go:851] "Failed to get status for pod" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6497cbfbf6-fkmfr\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.706021 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.726461 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.746590 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.765985 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.773111 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Readiness probe status=failure output="Get 
\"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.773173 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.776274 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.786131 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.792243 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.806409 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.826101 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.841034 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.841379 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.841434 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.841662 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.841684 4865 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.841729 4865 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-42cdm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.841753 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.848236 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.867848 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.872056 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" containerName="registry" containerID="cri-o://a82cb642866d620e7e8da4c34411e1a4054fd3eb6ccb5d984ad3c250d3945b97" gracePeriod=30 Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.887463 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g5xkl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.887504 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.887518 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.887590 4865 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.887802 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g5xkl container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.887880 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.888022 4865 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-g5xkl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.888053 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.905778 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.926186 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.946134 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.966304 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:33 crc kubenswrapper[4865]: I0123 12:58:33.986242 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.006086 4865 status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.007244 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" event={"ID":"da1cf187-8918-46b4-ab33-e8912c9d0dd6","Type":"ContainerStarted","Data":"1f82bbc6562ef119de8d44283cf07658ee939e78d2e833cee725ec522543517b"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.007402 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.010893 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" start-of-body= Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.010938 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.012183 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" event={"ID":"b2ea2452-dc3b-4b93-a9d4-e562a63111c9","Type":"ContainerStarted","Data":"f6355aa5a5dace796906b30065a06ceac12ef8ccf3d9daab57b9c0896657f733"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.013100 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.018245 4865 generic.go:334] "Generic (PLEG): container finished" podID="661fbfd2-7d52-419a-943f-c57854d2306b" containerID="589ec817fa33d2945a77636e47bece82f5412244ceb74275ac90da7c251be8f4" exitCode=1 Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.018315 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" event={"ID":"661fbfd2-7d52-419a-943f-c57854d2306b","Type":"ContainerDied","Data":"589ec817fa33d2945a77636e47bece82f5412244ceb74275ac90da7c251be8f4"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.018651 4865 scope.go:117] "RemoveContainer" containerID="589ec817fa33d2945a77636e47bece82f5412244ceb74275ac90da7c251be8f4" Jan 23 12:58:34 crc kubenswrapper[4865]: E0123 12:58:34.018852 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=manager pod=swift-operator-controller-manager-547cbdb99f-zm52l_openstack-operators(661fbfd2-7d52-419a-943f-c57854d2306b)\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.020822 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" event={"ID":"0167f850-ba43-426a-8c56-aa171131e7da","Type":"ContainerStarted","Data":"5ab50c49bb504542d7db9345701205b0b89a3c8f45e8e144d3514ccb73b674a6"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.021371 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.025651 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.028781 4865 generic.go:334] "Generic (PLEG): container finished" podID="e92ddc14-bdb6-4407-b8a3-047079030166" containerID="232be94353aac2e87626af7b68144c0253405f7ad62d2f7221de27a4f2375137" exitCode=1 Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.028843 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" event={"ID":"e92ddc14-bdb6-4407-b8a3-047079030166","Type":"ContainerDied","Data":"232be94353aac2e87626af7b68144c0253405f7ad62d2f7221de27a4f2375137"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.029435 4865 scope.go:117] "RemoveContainer" containerID="232be94353aac2e87626af7b68144c0253405f7ad62d2f7221de27a4f2375137" Jan 23 12:58:34 crc kubenswrapper[4865]: E0123 12:58:34.029663 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-b8b6d4659-9fl7w_openstack-operators(e92ddc14-bdb6-4407-b8a3-047079030166)\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.030198 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" event={"ID":"6d4fbfc8-900e-4c44-a458-039d37a6dd40","Type":"ContainerStarted","Data":"8def9c24761c33c45159a6ec2ce99f5dd723a4647323c64aed86dad731e312d5"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.030645 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.032441 4865 generic.go:334] "Generic (PLEG): container finished" podID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" containerID="6c85179785689b31c63f8780ade170488f7194ec897911ab511ac9d07ded86b1" exitCode=1 Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.032549 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" 
event={"ID":"5fb13a32-67c3-46b1-a0b8-573e941e6c7e","Type":"ContainerDied","Data":"6c85179785689b31c63f8780ade170488f7194ec897911ab511ac9d07ded86b1"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.033249 4865 scope.go:117] "RemoveContainer" containerID="6c85179785689b31c63f8780ade170488f7194ec897911ab511ac9d07ded86b1" Jan 23 12:58:34 crc kubenswrapper[4865]: E0123 12:58:34.033573 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-59dd8b7cbf-nppmq_openstack-operators(5fb13a32-67c3-46b1-a0b8-573e941e6c7e)\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.035121 4865 generic.go:334] "Generic (PLEG): container finished" podID="d2f4bfa4-63e2-418a-b52a-75d2992af596" containerID="ffee0a65d3a9d4aaf1aaaa4f2d0daee9888f2045360ad40a337ab9bdd0bd24ba" exitCode=1 Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.035171 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" event={"ID":"d2f4bfa4-63e2-418a-b52a-75d2992af596","Type":"ContainerDied","Data":"ffee0a65d3a9d4aaf1aaaa4f2d0daee9888f2045360ad40a337ab9bdd0bd24ba"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.035497 4865 scope.go:117] "RemoveContainer" containerID="ffee0a65d3a9d4aaf1aaaa4f2d0daee9888f2045360ad40a337ab9bdd0bd24ba" Jan 23 12:58:34 crc kubenswrapper[4865]: E0123 12:58:34.035708 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-c87fff755-mlm5v_openstack-operators(d2f4bfa4-63e2-418a-b52a-75d2992af596)\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.036994 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" event={"ID":"2c3366d9-565f-4601-acbb-b473dcfe126c","Type":"ContainerStarted","Data":"59753f8a9ac601813cf61722fa2f680aaa9360854df772d81a25c24ca3e9ccbd"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.037479 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.040306 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerStarted","Data":"9adc0594bef99650b792d62aa38c59284c841c5f1ebc4a1cf35ef37af13ea622"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.040786 4865 scope.go:117] "RemoveContainer" containerID="7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.042396 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" event={"ID":"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73","Type":"ContainerStarted","Data":"9b5e2623653f8096ac3aff4b822b14daf66396447038c4f8cf5cc198e1064fbb"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.043137 
4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.045264 4865 generic.go:334] "Generic (PLEG): container finished" podID="967c3782-1bce-4145-8244-7650fe19dc22" containerID="954df5dd39f6d5ed839258a04ecc82954ab9e41d05f0cb9bba184f8fd069c651" exitCode=1 Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.045354 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" event={"ID":"967c3782-1bce-4145-8244-7650fe19dc22","Type":"ContainerDied","Data":"954df5dd39f6d5ed839258a04ecc82954ab9e41d05f0cb9bba184f8fd069c651"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.045772 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.046014 4865 scope.go:117] "RemoveContainer" containerID="954df5dd39f6d5ed839258a04ecc82954ab9e41d05f0cb9bba184f8fd069c651" Jan 23 12:58:34 crc kubenswrapper[4865]: E0123 12:58:34.046297 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-h6dkp_openstack-operators(967c3782-1bce-4145-8244-7650fe19dc22)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.047084 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" event={"ID":"6aca96af-acfa-4c68-a2f4-ed19f08ddc4e","Type":"ContainerStarted","Data":"f3aedc3e84f03b5a8e35205c0b6b4acbbbb14f3224c2a1e020ffd763c7603f98"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.047137 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.048918 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" event={"ID":"a9bb243e-e7c3-4f68-be35-d86fa049c570","Type":"ContainerStarted","Data":"225e8ea119b89ec53412b288a78504658e10536d73422bdffe3ca05d7a7e6596"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.049040 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.051063 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" event={"ID":"10627175-8e39-4799-bec7-c0b49b938a29","Type":"ContainerStarted","Data":"fd24bf374cb93bd1ac3be24ba239a5a2297119650e90d74686695ca9642f7f88"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.052542 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.055089 
4865 generic.go:334] "Generic (PLEG): container finished" podID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerID="6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82" exitCode=0 Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.055145 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" event={"ID":"a51b0d26-bdc8-433f-90e5-d90b9bd94373","Type":"ContainerDied","Data":"6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.057533 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" event={"ID":"9177b0d0-3ce7-40fe-8567-85cb8dd5227a","Type":"ContainerStarted","Data":"6e3b76caed0d76172727765da5704f1260f0f6ff0e355debf75064878c56078f"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.057800 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.059156 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/1.log" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.061106 4865 generic.go:334] "Generic (PLEG): container finished" podID="582f83b4-97dc-4f56-9879-c73fab80488a" containerID="8b6c74d2e551ed18d0e83b546e1c83a5fe9bf0ff237daee33f85aa343ab45a7d" exitCode=1 Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.061214 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" event={"ID":"582f83b4-97dc-4f56-9879-c73fab80488a","Type":"ContainerDied","Data":"8b6c74d2e551ed18d0e83b546e1c83a5fe9bf0ff237daee33f85aa343ab45a7d"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.062640 4865 scope.go:117] "RemoveContainer" containerID="8b6c74d2e551ed18d0e83b546e1c83a5fe9bf0ff237daee33f85aa343ab45a7d" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.063413 4865 generic.go:334] "Generic (PLEG): container finished" podID="8e227974-40b8-4d16-8d5f-961b705a9740" containerID="f885d8e004cc28a105f22692ba41d19be021fcaf768af9b3403a43a9e72e86cd" exitCode=1 Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.063461 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" event={"ID":"8e227974-40b8-4d16-8d5f-961b705a9740","Type":"ContainerDied","Data":"f885d8e004cc28a105f22692ba41d19be021fcaf768af9b3403a43a9e72e86cd"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.063834 4865 scope.go:117] "RemoveContainer" containerID="f885d8e004cc28a105f22692ba41d19be021fcaf768af9b3403a43a9e72e86cd" Jan 23 12:58:34 crc kubenswrapper[4865]: E0123 12:58:34.064021 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-fdkt9_openstack-operators(8e227974-40b8-4d16-8d5f-961b705a9740)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.067765 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.069064 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" event={"ID":"bdf8f14b-af0d-43cc-b624-7dab2879dc4b","Type":"ContainerStarted","Data":"82a5dc53e670de19adf070d15e3558500b1a04ef07b5381860ecdd360fb8e0fd"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.069153 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.071514 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8bjkz" event={"ID":"3685d2b2-151b-479a-92c1-ae400eacd1b9","Type":"ContainerStarted","Data":"a5571cd178bb438261317ba38387e608ad50d4bb004aa2d11391f7a29dd99411"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.071858 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.118680 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.119237 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.121906 4865 generic.go:334] "Generic (PLEG): container finished" podID="78884295-a3de-4e00-bcc4-6a1627b50717" containerID="50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d" exitCode=137 Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.125768 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.125927 4865 generic.go:334] "Generic (PLEG): container finished" podID="5cf30925-0355-42db-9895-f23a97fca08e" containerID="7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3" exitCode=137 Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.134775 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"78884295-a3de-4e00-bcc4-6a1627b50717","Type":"ContainerDied","Data":"50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.134812 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"5cf30925-0355-42db-9895-f23a97fca08e","Type":"ContainerDied","Data":"7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.134825 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" event={"ID":"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b","Type":"ContainerStarted","Data":"66b146c5353ddb9b635d97c53b489328a11e843c79525d2f1a00e177a906335e"} Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.134859 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.144859 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.144905 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.146622 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.168790 4865 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-4g249 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.168831 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.168853 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.186884 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.206029 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" 
pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.226281 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.245714 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.265941 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.287176 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.306318 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.326121 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.345933 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.355229 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.366292 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.386267 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.406479 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.426940 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.446219 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.466616 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.486423 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.506057 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.512622 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.526566 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.546717 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.563115 4865 scope.go:117] "RemoveContainer" containerID="62c21373c0eebc0a568adb43c80526621a4e95ed48f1f7ec1047e095cb2d1298" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.565983 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.585978 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.606262 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.626472 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.646183 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.665826 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.686450 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.706191 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.726866 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.745837 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.765883 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.776239 4865 kuberuntime_container.go:700] "PreStop hook not completed in grace period" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovsdb-server" containerID="cri-o://bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c" gracePeriod=30 Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.776280 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovsdb-server" containerID="cri-o://bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c" gracePeriod=2 Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.787812 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.802672 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/speaker-szb9h" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.808205 4865 status_manager.go:851] "Failed to get status for pod" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/pods/csi-hostpathplugin-g7l9x\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: E0123 12:58:34.821066 4865 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 23 12:58:34 crc kubenswrapper[4865]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 23 12:58:34 crc kubenswrapper[4865]: + source /usr/local/bin/container-scripts/functions Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNBridge=br-int Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNRemote=tcp:localhost:6642 Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNEncapType=geneve Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNAvailabilityZones= Jan 23 12:58:34 crc kubenswrapper[4865]: ++ EnableChassisAsGateway=true Jan 23 12:58:34 crc kubenswrapper[4865]: ++ PhysicalNetworks= Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNHostName= Jan 23 12:58:34 crc kubenswrapper[4865]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 23 12:58:34 crc kubenswrapper[4865]: ++ ovs_dir=/var/lib/openvswitch Jan 23 12:58:34 crc kubenswrapper[4865]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 23 12:58:34 crc kubenswrapper[4865]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 23 12:58:34 crc kubenswrapper[4865]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-spv64" message=< Jan 23 12:58:34 crc kubenswrapper[4865]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 23 12:58:34 crc kubenswrapper[4865]: + source /usr/local/bin/container-scripts/functions Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNBridge=br-int Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNRemote=tcp:localhost:6642 Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNEncapType=geneve Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNAvailabilityZones= Jan 23 12:58:34 crc kubenswrapper[4865]: ++ EnableChassisAsGateway=true Jan 23 12:58:34 crc kubenswrapper[4865]: ++ PhysicalNetworks= Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNHostName= Jan 23 12:58:34 crc kubenswrapper[4865]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 23 12:58:34 crc kubenswrapper[4865]: ++ ovs_dir=/var/lib/openvswitch Jan 23 12:58:34 crc kubenswrapper[4865]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 23 12:58:34 crc kubenswrapper[4865]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 23 12:58:34 crc kubenswrapper[4865]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: > Jan 23 12:58:34 crc kubenswrapper[4865]: E0123 12:58:34.821262 4865 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 23 12:58:34 crc kubenswrapper[4865]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Jan 23 12:58:34 crc kubenswrapper[4865]: + source /usr/local/bin/container-scripts/functions Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNBridge=br-int Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNRemote=tcp:localhost:6642 Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNEncapType=geneve Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNAvailabilityZones= Jan 23 12:58:34 crc kubenswrapper[4865]: ++ EnableChassisAsGateway=true Jan 23 12:58:34 crc kubenswrapper[4865]: ++ PhysicalNetworks= Jan 23 12:58:34 crc kubenswrapper[4865]: ++ OVNHostName= Jan 23 12:58:34 crc kubenswrapper[4865]: ++ DB_FILE=/etc/openvswitch/conf.db Jan 23 12:58:34 crc kubenswrapper[4865]: ++ ovs_dir=/var/lib/openvswitch Jan 23 12:58:34 crc kubenswrapper[4865]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Jan 23 12:58:34 crc kubenswrapper[4865]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Jan 23 12:58:34 crc kubenswrapper[4865]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 23 12:58:34 crc kubenswrapper[4865]: + sleep 0.5 Jan 23 12:58:34 crc kubenswrapper[4865]: > pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovsdb-server" containerID="cri-o://bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.826724 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.849000 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.866356 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.886230 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.887808 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.887850 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.888721 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.888799 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" 
probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.912015 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.929002 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.946369 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.965638 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:34 crc kubenswrapper[4865]: I0123 12:58:34.985790 4865 status_manager.go:851] "Failed to get status for pod" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6497cbfbf6-fkmfr\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.006184 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.026351 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.045532 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: 
connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.066394 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.087295 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.106039 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.125638 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.144988 4865 generic.go:334] "Generic (PLEG): container finished" podID="a2830362-05e6-4a49-887e-cf3d25cf65a4" containerID="a82cb642866d620e7e8da4c34411e1a4054fd3eb6ccb5d984ad3c250d3945b97" exitCode=0 Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.145047 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" event={"ID":"a2830362-05e6-4a49-887e-cf3d25cf65a4","Type":"ContainerDied","Data":"a82cb642866d620e7e8da4c34411e1a4054fd3eb6ccb5d984ad3c250d3945b97"} Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.150536 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.151097 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-spv64_11be7549-5b2b-49e9-b11e-7035922b3673/ovsdb-server/0.log" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.151897 4865 generic.go:334] "Generic (PLEG): container finished" podID="11be7549-5b2b-49e9-b11e-7035922b3673" containerID="bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c" exitCode=143 Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.151959 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-spv64" event={"ID":"11be7549-5b2b-49e9-b11e-7035922b3673","Type":"ContainerDied","Data":"bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c"} Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.155642 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerID="6e3b76caed0d76172727765da5704f1260f0f6ff0e355debf75064878c56078f" exitCode=1 Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.155691 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" event={"ID":"9177b0d0-3ce7-40fe-8567-85cb8dd5227a","Type":"ContainerDied","Data":"6e3b76caed0d76172727765da5704f1260f0f6ff0e355debf75064878c56078f"} Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.156274 4865 scope.go:117] "RemoveContainer" containerID="6e3b76caed0d76172727765da5704f1260f0f6ff0e355debf75064878c56078f" Jan 23 12:58:35 crc kubenswrapper[4865]: E0123 12:58:35.156557 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=webhook-server pod=metallb-operator-webhook-server-78f5776895-s7hqg_metallb-system(9177b0d0-3ce7-40fe-8567-85cb8dd5227a)\"" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.163346 4865 generic.go:334] "Generic (PLEG): container finished" podID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerID="bd719900c8142a7d20c2f2d0218496dbcd37cde9dab823d7260847f6749c0bcb" exitCode=1 Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.163468 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" event={"ID":"429b62c2-b748-40b1-b00f-a1b0488fc5d0","Type":"ContainerDied","Data":"bd719900c8142a7d20c2f2d0218496dbcd37cde9dab823d7260847f6749c0bcb"} Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.164132 4865 scope.go:117] "RemoveContainer" containerID="bd719900c8142a7d20c2f2d0218496dbcd37cde9dab823d7260847f6749c0bcb" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.169075 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.187704 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.188981 4865 generic.go:334] "Generic (PLEG): container finished" podID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerID="a5571cd178bb438261317ba38387e608ad50d4bb004aa2d11391f7a29dd99411" exitCode=1 Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.189047 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8bjkz" event={"ID":"3685d2b2-151b-479a-92c1-ae400eacd1b9","Type":"ContainerDied","Data":"a5571cd178bb438261317ba38387e608ad50d4bb004aa2d11391f7a29dd99411"} Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.189854 4865 scope.go:117] "RemoveContainer" 
containerID="a5571cd178bb438261317ba38387e608ad50d4bb004aa2d11391f7a29dd99411" Jan 23 12:58:35 crc kubenswrapper[4865]: E0123 12:58:35.190171 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller pod=controller-6968d8fdc4-8bjkz_metallb-system(3685d2b2-151b-479a-92c1-ae400eacd1b9)\"" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.205727 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.210307 4865 generic.go:334] "Generic (PLEG): container finished" podID="0167f850-ba43-426a-8c56-aa171131e7da" containerID="5ab50c49bb504542d7db9345701205b0b89a3c8f45e8e144d3514ccb73b674a6" exitCode=1 Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.210433 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" event={"ID":"0167f850-ba43-426a-8c56-aa171131e7da","Type":"ContainerDied","Data":"5ab50c49bb504542d7db9345701205b0b89a3c8f45e8e144d3514ccb73b674a6"} Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.211266 4865 scope.go:117] "RemoveContainer" containerID="5ab50c49bb504542d7db9345701205b0b89a3c8f45e8e144d3514ccb73b674a6" Jan 23 12:58:35 crc kubenswrapper[4865]: E0123 12:58:35.211529 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-594c8c9d5d-fsch6_openstack-operators(0167f850-ba43-426a-8c56-aa171131e7da)\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.231303 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.241130 4865 generic.go:334] "Generic (PLEG): container finished" podID="93194445-a021-4960-ab82-085f13cc959d" containerID="f632a0ec27e84458e4e6a53018ba24d615fb3557c7f27191234cdc3926b8f3a4" exitCode=1 Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.241350 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" event={"ID":"93194445-a021-4960-ab82-085f13cc959d","Type":"ContainerDied","Data":"f632a0ec27e84458e4e6a53018ba24d615fb3557c7f27191234cdc3926b8f3a4"} Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.242757 4865 scope.go:117] "RemoveContainer" containerID="f632a0ec27e84458e4e6a53018ba24d615fb3557c7f27191234cdc3926b8f3a4" Jan 23 12:58:35 crc kubenswrapper[4865]: E0123 12:58:35.243099 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 
10s restarting failed container=manager pod=ovn-operator-controller-manager-55db956ddc-cbz92_openstack-operators(93194445-a021-4960-ab82-085f13cc959d)\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.247062 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.248943 4865 generic.go:334] "Generic (PLEG): container finished" podID="8ef0fdaa-8086-467d-8106-5c6dec532dba" containerID="7b28b314a62253bab2dff6dd6dcdbd4bdcfa958e016c67c5de24c34342098c1a" exitCode=1 Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.249000 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" event={"ID":"8ef0fdaa-8086-467d-8106-5c6dec532dba","Type":"ContainerDied","Data":"7b28b314a62253bab2dff6dd6dcdbd4bdcfa958e016c67c5de24c34342098c1a"} Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.250546 4865 scope.go:117] "RemoveContainer" containerID="7b28b314a62253bab2dff6dd6dcdbd4bdcfa958e016c67c5de24c34342098c1a" Jan 23 12:58:35 crc kubenswrapper[4865]: E0123 12:58:35.250901 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-5ffb9c6597-7mv2d_openstack-operators(8ef0fdaa-8086-467d-8106-5c6dec532dba)\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.265733 4865 generic.go:334] "Generic (PLEG): container finished" podID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerID="c6bcaf8ad19683b140c9c0ef03792fd1daa4e728dc17ee7c8d7fadfa8d25607c" exitCode=1 Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.266206 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" event={"ID":"4116044f-0cc3-41fb-9f26-536213e1dfa3","Type":"ContainerDied","Data":"c6bcaf8ad19683b140c9c0ef03792fd1daa4e728dc17ee7c8d7fadfa8d25607c"} Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.273326 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.275891 4865 scope.go:117] "RemoveContainer" containerID="c6bcaf8ad19683b140c9c0ef03792fd1daa4e728dc17ee7c8d7fadfa8d25607c" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.295122 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.308727 4865 status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.327663 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.346629 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.368413 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.386687 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.406993 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.427762 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.446088 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.467133 4865 
status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.486261 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.510685 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.514666 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.514709 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.531642 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.547455 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.566223 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.586199 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.606240 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.626475 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.645957 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.666192 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.686408 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.707243 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.725995 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.746862 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: 
connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.767225 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.786714 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.806664 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.826908 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.846718 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.866702 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.886300 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.906587 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.926026 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" 
pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.945816 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.968195 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:35 crc kubenswrapper[4865]: I0123 12:58:35.986518 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.006667 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.026554 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.046117 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.066151 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.086145 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 
38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.097992 4865 scope.go:117] "RemoveContainer" containerID="34c12403002230bf2149bbd73d264e2d87708fb1feba635b4fb8637cfcefe7d5" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.106127 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.132078 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.146490 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.165716 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.187503 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.208929 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.217247 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.217891 4865 scope.go:117] "RemoveContainer" containerID="6c85179785689b31c63f8780ade170488f7194ec897911ab511ac9d07ded86b1" Jan 23 12:58:36 crc kubenswrapper[4865]: E0123 12:58:36.218086 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-59dd8b7cbf-nppmq_openstack-operators(5fb13a32-67c3-46b1-a0b8-573e941e6c7e)\"" 
pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.226112 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.245736 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.266755 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.286168 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.302620 4865 scope.go:117] "RemoveContainer" containerID="a5571cd178bb438261317ba38387e608ad50d4bb004aa2d11391f7a29dd99411" Jan 23 12:58:36 crc kubenswrapper[4865]: E0123 12:58:36.302847 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller pod=controller-6968d8fdc4-8bjkz_metallb-system(3685d2b2-151b-479a-92c1-ae400eacd1b9)\"" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.303517 4865 scope.go:117] "RemoveContainer" containerID="6e3b76caed0d76172727765da5704f1260f0f6ff0e355debf75064878c56078f" Jan 23 12:58:36 crc kubenswrapper[4865]: E0123 12:58:36.303761 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=webhook-server pod=metallb-operator-webhook-server-78f5776895-s7hqg_metallb-system(9177b0d0-3ce7-40fe-8567-85cb8dd5227a)\"" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.305051 4865 scope.go:117] "RemoveContainer" containerID="5ab50c49bb504542d7db9345701205b0b89a3c8f45e8e144d3514ccb73b674a6" Jan 23 12:58:36 crc kubenswrapper[4865]: E0123 12:58:36.305207 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=heat-operator-controller-manager-594c8c9d5d-fsch6_openstack-operators(0167f850-ba43-426a-8c56-aa171131e7da)\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.305734 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.326564 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.346082 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.366330 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.384001 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.388201 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.411728 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.426219 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.445722 
4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.466335 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.488913 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.506484 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.515084 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.515165 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.526337 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.545653 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.570053 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" 
pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.586440 4865 status_manager.go:851] "Failed to get status for pod" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/pods/csi-hostpathplugin-g7l9x\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.608112 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.618720 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.619341 4865 scope.go:117] "RemoveContainer" containerID="954df5dd39f6d5ed839258a04ecc82954ab9e41d05f0cb9bba184f8fd069c651" Jan 23 12:58:36 crc kubenswrapper[4865]: E0123 12:58:36.619543 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-h6dkp_openstack-operators(967c3782-1bce-4145-8244-7650fe19dc22)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.627076 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.630418 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8081/readyz\": dial tcp 10.217.0.222:8081: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.646524 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.666754 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.686292 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.700489 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.701328 4865 scope.go:117] "RemoveContainer" containerID="232be94353aac2e87626af7b68144c0253405f7ad62d2f7221de27a4f2375137" Jan 23 12:58:36 crc kubenswrapper[4865]: E0123 12:58:36.701643 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-b8b6d4659-9fl7w_openstack-operators(e92ddc14-bdb6-4407-b8a3-047079030166)\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.707002 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.727777 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.744417 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.745383 4865 scope.go:117] "RemoveContainer" containerID="ffee0a65d3a9d4aaf1aaaa4f2d0daee9888f2045360ad40a337ab9bdd0bd24ba" Jan 23 12:58:36 crc kubenswrapper[4865]: E0123 12:58:36.748748 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-c87fff755-mlm5v_openstack-operators(d2f4bfa4-63e2-418a-b52a-75d2992af596)\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.749321 4865 status_manager.go:851] "Failed to get status for pod" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6497cbfbf6-fkmfr\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.766438 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.786478 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.807388 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.821939 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.826554 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.845500 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.845720 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.846207 4865 scope.go:117] "RemoveContainer" containerID="e7440e71b764fc1170b4e582df1fa0de60d00e2cc4d7348e19eb5ccc39b95a74" Jan 23 12:58:36 crc kubenswrapper[4865]: E0123 12:58:36.846522 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.866384 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" 
pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.886194 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.906039 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.921719 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.922557 4865 scope.go:117] "RemoveContainer" containerID="f632a0ec27e84458e4e6a53018ba24d615fb3557c7f27191234cdc3926b8f3a4" Jan 23 12:58:36 crc kubenswrapper[4865]: E0123 12:58:36.922990 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-55db956ddc-cbz92_openstack-operators(93194445-a021-4960-ab82-085f13cc959d)\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.926641 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.946070 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.966479 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:36 crc kubenswrapper[4865]: I0123 12:58:36.986361 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.006440 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.026741 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.046892 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.057629 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.057694 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.065657 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.086497 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.106081 4865 status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc 
kubenswrapper[4865]: I0123 12:58:37.126535 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.146282 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: E0123 12:58:37.160377 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c is running failed: container process not found" containerID="bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 12:58:37 crc kubenswrapper[4865]: E0123 12:58:37.160696 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c is running failed: container process not found" containerID="bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 12:58:37 crc kubenswrapper[4865]: E0123 12:58:37.161046 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c is running failed: container process not found" containerID="bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 12:58:37 crc kubenswrapper[4865]: E0123 12:58:37.161086 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovsdb-server" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.166673 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.185885 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 
12:58:37.205996 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.225779 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.245988 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.265915 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.286753 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.295807 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.296814 4865 scope.go:117] "RemoveContainer" containerID="7b28b314a62253bab2dff6dd6dcdbd4bdcfa958e016c67c5de24c34342098c1a" Jan 23 12:58:37 crc kubenswrapper[4865]: E0123 12:58:37.297199 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-5ffb9c6597-7mv2d_openstack-operators(8ef0fdaa-8086-467d-8106-5c6dec532dba)\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.299730 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.300432 4865 scope.go:117] "RemoveContainer" containerID="589ec817fa33d2945a77636e47bece82f5412244ceb74275ac90da7c251be8f4" Jan 23 12:58:37 crc kubenswrapper[4865]: E0123 12:58:37.300691 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=swift-operator-controller-manager-547cbdb99f-zm52l_openstack-operators(661fbfd2-7d52-419a-943f-c57854d2306b)\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" Jan 23 12:58:37 crc kubenswrapper[4865]: E0123 12:58:37.304389 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:37 crc kubenswrapper[4865]: E0123 12:58:37.304730 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:37 crc kubenswrapper[4865]: E0123 12:58:37.305180 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:37 crc kubenswrapper[4865]: E0123 12:58:37.305207 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.306752 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.325822 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.346315 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.366190 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6497cbfbf6-fkmfr\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.385691 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.408721 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.425738 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.445557 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: E0123 12:58:37.449493 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event=< Jan 23 12:58:37 crc kubenswrapper[4865]: &Event{ObjectMeta:{catalog-operator-68c6474976-42cdm.188d5d87792a6b03 openshift-operator-lifecycle-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-operator-lifecycle-manager,Name:catalog-operator-68c6474976-42cdm,UID:843c383b-053f-42f5-88ce-7a216f5354a3,APIVersion:v1,ResourceVersion:27211,FieldPath:spec.containers{catalog-operator},},Reason:ProbeError,Message:Liveness probe error: Get "https://10.217.0.26:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 23 12:58:37 crc kubenswrapper[4865]: body: Jan 23 12:58:37 crc kubenswrapper[4865]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 12:57:45.061264131 +0000 UTC m=+3909.230336357,LastTimestamp:2026-01-23 12:57:45.061264131 +0000 UTC m=+3909.230336357,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 12:58:37 crc kubenswrapper[4865]: > Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.465652 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.485756 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.505865 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.525572 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.545711 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.552853 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.552897 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.553255 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.553281 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.566188 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.579749 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.582050 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.585677 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.605892 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.626565 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.646564 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.666975 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.685969 4865 status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 
12:58:37.706674 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.725184 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.726029 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.745975 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.766471 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.786519 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.809102 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.826329 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.846407 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.866065 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.886634 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.906278 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.926694 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.945921 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.966915 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:37 crc kubenswrapper[4865]: I0123 12:58:37.986492 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.006559 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: E0123 12:58:38.009011 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="7s" Jan 23 12:58:38 crc 
kubenswrapper[4865]: I0123 12:58:38.026342 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.046356 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.066070 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.086133 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.105907 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.125916 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.147043 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.167950 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.186684 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" 
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.205766 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.226833 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.246373 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.265957 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.287331 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.306296 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.335499 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.345906 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 
38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.366449 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.385941 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.406424 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.426429 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.446444 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.465892 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.486073 4865 status_manager.go:851] "Failed to get status for pod" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/pods/csi-hostpathplugin-g7l9x\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.506229 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.528486 4865 status_manager.go:851] 
"Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.546863 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.566845 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.586089 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.605589 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.625780 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.646253 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.666574 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.686123 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection 
refused" start-of-body= Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.686192 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.686425 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.706144 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.726677 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.746374 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.766573 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.786791 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.806403 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.826223 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.846159 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.867779 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.886115 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.906032 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.926454 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.946926 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.966416 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:38 crc kubenswrapper[4865]: I0123 12:58:38.986453 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.006018 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.026028 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.046675 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.066505 4865 status_manager.go:851] "Failed to get status for pod" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/pods/csi-hostpathplugin-g7l9x\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.086386 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.106109 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.126525 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.144927 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.145246 4865 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.145798 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.163870 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": dial tcp 10.217.0.87:8081: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.166483 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.186208 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.206041 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.226842 4865 status_manager.go:851] "Failed to get status for pod" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6497cbfbf6-fkmfr\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.246717 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.265697 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.286962 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.306303 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.326221 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.345757 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.366159 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.385932 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.406036 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.426502 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.446278 4865 status_manager.go:851] 
"Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.467235 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.485958 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.506581 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.525751 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.546290 4865 status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.566112 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.586785 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.605963 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.626806 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.646577 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.667169 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.686231 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.705710 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.726176 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.746271 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: E0123 12:58:39.752202 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d is running failed: container process not found" containerID="50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:39 crc kubenswrapper[4865]: E0123 12:58:39.752856 4865 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d is running failed: container process not found" containerID="50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:39 crc kubenswrapper[4865]: E0123 12:58:39.753369 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d is running failed: container process not found" containerID="50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:39 crc kubenswrapper[4865]: E0123 12:58:39.753461 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 50687737949fed0a6b770d30725a645b50601a0cea614ddbe7928a63d4e0d04d is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" containerName="galera" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.765908 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.786084 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.806078 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.826719 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.845592 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.866359 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.886497 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:39 crc kubenswrapper[4865]: I0123 12:58:39.905632 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:40 crc kubenswrapper[4865]: E0123 12:58:40.232196 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec is running failed: container process not found" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:40 crc kubenswrapper[4865]: E0123 12:58:40.232872 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec is running failed: container process not found" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:40 crc kubenswrapper[4865]: E0123 12:58:40.233157 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec is running failed: container process not found" containerID="68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:40 crc kubenswrapper[4865]: E0123 12:58:40.233225 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 68c10641fc116f57bbeff371626d7d644247ec197a5ba11a6a96953bc50f36ec is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" Jan 23 12:58:40 crc kubenswrapper[4865]: I0123 12:58:40.439829 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 12:58:40 crc kubenswrapper[4865]: E0123 12:58:40.457424 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff is running failed: container process not found" 
containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:40 crc kubenswrapper[4865]: E0123 12:58:40.458070 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff is running failed: container process not found" containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:40 crc kubenswrapper[4865]: E0123 12:58:40.458678 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff is running failed: container process not found" containerID="d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:40 crc kubenswrapper[4865]: E0123 12:58:40.458805 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d82156684fd2437a0680fdc0b7f87ffb1876e915a374a4886f74c9f84eec7cff is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" Jan 23 12:58:40 crc kubenswrapper[4865]: I0123 12:58:40.488553 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 23 12:58:40 crc kubenswrapper[4865]: I0123 12:58:40.488956 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 23 12:58:40 crc kubenswrapper[4865]: I0123 12:58:40.551023 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:40 crc kubenswrapper[4865]: I0123 12:58:40.551363 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:40 crc kubenswrapper[4865]: I0123 12:58:40.551490 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 12:58:40 crc kubenswrapper[4865]: I0123 12:58:40.552372 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"d3495ed84a53f12e4007a0f99ec4d52ea21f2cbe622e4f903c019346c6618125"} 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 23 12:58:40 crc kubenswrapper[4865]: I0123 12:58:40.552514 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" containerID="cri-o://d3495ed84a53f12e4007a0f99ec4d52ea21f2cbe622e4f903c019346c6618125" gracePeriod=30 Jan 23 12:58:40 crc kubenswrapper[4865]: I0123 12:58:40.552932 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:40 crc kubenswrapper[4865]: I0123 12:58:40.553112 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:40 crc kubenswrapper[4865]: E0123 12:58:40.778201 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3 is running failed: container process not found" containerID="7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:40 crc kubenswrapper[4865]: E0123 12:58:40.778508 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3 is running failed: container process not found" containerID="7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:40 crc kubenswrapper[4865]: E0123 12:58:40.778848 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3 is running failed: container process not found" containerID="7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 23 12:58:40 crc kubenswrapper[4865]: E0123 12:58:40.778879 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7dab53c7d7f40513d325efc315ff43b6913a70fe9971a21f5c0b527910640ce3 is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="5cf30925-0355-42db-9895-f23a97fca08e" containerName="galera" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.118450 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:58:41 crc 
kubenswrapper[4865]: E0123 12:58:41.118963 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.228322 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.229731 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.230430 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.231564 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.233057 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": read tcp 10.217.0.2:60082->10.217.0.11:8443: read: connection reset by peer" start-of-body= Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.233133 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": read tcp 10.217.0.2:60082->10.217.0.11:8443: read: connection reset by peer" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.234026 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.234382 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.234690 4865 
status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.234972 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.235315 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.235787 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.236267 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.236720 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.237123 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.237547 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.238030 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.238568 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.239077 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.241395 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.241771 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.242035 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.242389 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.243941 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.245005 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.245539 4865 
status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.248160 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.248429 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.248779 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.249153 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.249503 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.249972 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.250423 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.251580 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.252191 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.252564 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.252839 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.253078 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.253320 4865 status_manager.go:851] "Failed to get status for pod" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/pods/csi-hostpathplugin-g7l9x\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.253571 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.253823 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.254056 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 
38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.254283 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.254529 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.254749 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.255031 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.255661 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.256014 4865 status_manager.go:851] "Failed to get status for pod" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6497cbfbf6-fkmfr\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.256211 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.256476 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.256773 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" 
pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.257157 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.257458 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.259040 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.259259 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.259462 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.259676 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.259875 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.260068 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: 
connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.260250 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.260479 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.261184 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.261446 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.261738 4865 status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.261939 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.262133 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.262319 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.262505 4865 
status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.377449 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.378018 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.378258 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.378482 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.378724 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.378976 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.379186 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.379410 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.379637 4865 
status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.384119 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.387013 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.405987 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.426010 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.445876 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.466524 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.485899 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.506424 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.526303 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.546173 4865 status_manager.go:851] "Failed to get status for pod" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/pods/csi-hostpathplugin-g7l9x\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.566037 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.586168 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.605965 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.626272 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.646381 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.666248 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 
12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.686322 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.718315 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.726042 4865 status_manager.go:851] "Failed to get status for pod" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6497cbfbf6-fkmfr\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.746388 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.767019 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.786445 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.806050 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.827821 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.846566 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.866150 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.890314 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.906325 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.926438 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.946014 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.966243 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:41 crc kubenswrapper[4865]: I0123 12:58:41.985878 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.006365 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.010418 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": dial tcp 10.217.0.60:8081: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.026444 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.046458 4865 status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.066042 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.086730 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.106718 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.126483 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.145997 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.162465 4865 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c is running failed: container process not found" containerID="bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.163027 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c is running failed: container process not found" containerID="bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.163927 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c is running failed: container process not found" containerID="bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.164049 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of bebd31b76993ae6c0f02026413adcf5914b7c0a305c05c2812351f6abc51f48c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-spv64" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" containerName="ovsdb-server" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.166347 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.186519 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.206814 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.226535 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.229636 4865 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-sgsqx container/registry 
namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": dial tcp 10.217.0.70:5000: connect: connection refused" start-of-body= Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.229724 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.70:5000/healthz\": dial tcp 10.217.0.70:5000: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.246517 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.266258 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.286701 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.304317 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.304942 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.305227 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" containerID="fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8" cmd=["/usr/local/bin/container-scripts/ovn_controller_readiness.sh"] Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.305263 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc2840982d3003e453a3177351585232fec53b55f689496bd4a3ce30ef3750d8 is running failed: container process not found" probeType="Readiness" 
pod="openstack/ovn-controller-hz4vm" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" containerName="ovn-controller" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.305957 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.310790 4865 scope.go:117] "RemoveContainer" containerID="129bfde63977859660c6eb3aa9e50a03c29e7268576ca70bbc6f2ad00f8febc8" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.326144 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.346772 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.367881 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.383886 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d7d54b946-29gbz_9e2332f2-6e3b-4355-9af1-24a8980c7d8a/console/0.log" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.383982 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d7d54b946-29gbz" event={"ID":"9e2332f2-6e3b-4355-9af1-24a8980c7d8a","Type":"ContainerStarted","Data":"8c921e1c243082f9ea494c8b5dd7b75963b480b297a3069c9ad1b1dae4bb2944"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.385788 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.386451 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-swk7h_3fbcdfcf-19cc-46b9-a986-bd9426751459/router/0.log" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.386588 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-swk7h" event={"ID":"3fbcdfcf-19cc-46b9-a986-bd9426751459","Type":"ContainerStarted","Data":"fa79f5adba297183e0e7a52de4acbe1da6c78bb80acfb40e132466e23df1908e"} Jan 
23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.388457 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" event={"ID":"189c80ac-7038-4b48-bebb-5c5d7e2cd362","Type":"ContainerStarted","Data":"ae1508f276d444032c118fb0b67f9e5568656f5e50cd128fa88fdc40396b41ce"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.391053 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" event={"ID":"2c1ba660-8691-49e2-b0cc-056355d82f4c","Type":"ContainerStarted","Data":"e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.392671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hz4vm" event={"ID":"d8331842-a45a-4cbf-a55b-0d8dde7f69eb","Type":"ContainerStarted","Data":"bd42e3caf1c437284ee62adc434e1bd85fedafa82467a94f85ab06f308de9fd7"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.392866 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hz4vm" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.395225 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c728912d-821c-4759-b175-3fd4324ad4f2","Type":"ContainerStarted","Data":"c74f0d480b376c78be8098a723dd25f8263d240a2395a6275f1e9ce7a869a41f"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.398887 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5a005b74805dc9905db734258efa56f0a20d88333173f9257c3d100c1dbcaf21"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.400707 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" event={"ID":"429b62c2-b748-40b1-b00f-a1b0488fc5d0","Type":"ContainerStarted","Data":"f6d9b4b3d5c12dd18a1e548634a2f2a1a036af095d890d878feed5bd34197f18"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.402408 4865 generic.go:334] "Generic (PLEG): container finished" podID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" containerID="82a5dc53e670de19adf070d15e3558500b1a04ef07b5381860ecdd360fb8e0fd" exitCode=1 Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.402464 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" event={"ID":"bdf8f14b-af0d-43cc-b624-7dab2879dc4b","Type":"ContainerDied","Data":"82a5dc53e670de19adf070d15e3558500b1a04ef07b5381860ecdd360fb8e0fd"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.403576 4865 scope.go:117] "RemoveContainer" containerID="82a5dc53e670de19adf070d15e3558500b1a04ef07b5381860ecdd360fb8e0fd" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.403991 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-69cf5d4557-9jp5b_openstack-operators(bdf8f14b-af0d-43cc-b624-7dab2879dc4b)\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.404834 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" event={"ID":"a51b0d26-bdc8-433f-90e5-d90b9bd94373","Type":"ContainerStarted","Data":"b328d9b54b4bb04befe9c7fb9488ed21d048a4b9e5592f701a8c415ab5bad0a2"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.405793 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.406726 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/1.log" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.407128 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" event={"ID":"582f83b4-97dc-4f56-9879-c73fab80488a","Type":"ContainerStarted","Data":"a731dd1f940b77d2d471bc77ff3834d9ed1c0aaa7dc63e059f42afa2cba767ec"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.408905 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-42cdm_843c383b-053f-42f5-88ce-7a216f5354a3/catalog-operator/1.log" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.409218 4865 generic.go:334] "Generic (PLEG): container finished" podID="843c383b-053f-42f5-88ce-7a216f5354a3" containerID="eab4cbda6d14d24be83659972215f2f801fab258cb4ad7de3a085c70e05d8d00" exitCode=1 Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.409262 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" event={"ID":"843c383b-053f-42f5-88ce-7a216f5354a3","Type":"ContainerDied","Data":"eab4cbda6d14d24be83659972215f2f801fab258cb4ad7de3a085c70e05d8d00"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.409783 4865 scope.go:117] "RemoveContainer" containerID="eab4cbda6d14d24be83659972215f2f801fab258cb4ad7de3a085c70e05d8d00" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.416842 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhd4g" event={"ID":"c9ae9da8-9e6d-44ba-82c9-9842698cfa4f","Type":"ContainerStarted","Data":"3662679d02531f75a7d196e96da483e278321c9b60f18f9da04542a1cf394b94"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.421454 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c63db198-8ec8-42b1-8211-d207c172706c","Type":"ContainerStarted","Data":"cbd56c19fce379136487197a6ca7b7c1c495ccc639974ad450c02ab717ac42c7"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.424529 4865 generic.go:334] "Generic (PLEG): container finished" podID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" containerID="66b146c5353ddb9b635d97c53b489328a11e843c79525d2f1a00e177a906335e" exitCode=1 Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.424607 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" event={"ID":"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b","Type":"ContainerDied","Data":"66b146c5353ddb9b635d97c53b489328a11e843c79525d2f1a00e177a906335e"} Jan 23 12:58:42 crc kubenswrapper[4865]: 
I0123 12:58:42.425187 4865 scope.go:117] "RemoveContainer" containerID="66b146c5353ddb9b635d97c53b489328a11e843c79525d2f1a00e177a906335e" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.425428 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-7df9698d5d-lk94b_metallb-system(d1a0503d-3fc4-45b6-87c0-7af4a7246a4b)\"" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.425950 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.427227 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" event={"ID":"a2830362-05e6-4a49-887e-cf3d25cf65a4","Type":"ContainerStarted","Data":"018606cb506a2cb53818c7e20e4decd7bd2d2a569062921b16e727e37efb45a8"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.429581 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" event={"ID":"97f32b90-08dc-4333-95e6-a2e85648931f","Type":"ContainerStarted","Data":"709ba88ec5f1689921e2182bf64d2a616f8f0991f5ebadd01e11204a934302f1"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.431624 4865 generic.go:334] "Generic (PLEG): container finished" podID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" containerID="1f82bbc6562ef119de8d44283cf07658ee939e78d2e833cee725ec522543517b" exitCode=1 Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.431675 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" event={"ID":"da1cf187-8918-46b4-ab33-e8912c9d0dd6","Type":"ContainerDied","Data":"1f82bbc6562ef119de8d44283cf07658ee939e78d2e833cee725ec522543517b"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.432316 4865 scope.go:117] "RemoveContainer" containerID="1f82bbc6562ef119de8d44283cf07658ee939e78d2e833cee725ec522543517b" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.432535 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-78fdd796fd-8qtnc_openstack-operators(da1cf187-8918-46b4-ab33-e8912c9d0dd6)\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.434441 4865 generic.go:334] "Generic (PLEG): container finished" podID="a9bb243e-e7c3-4f68-be35-d86fa049c570" containerID="225e8ea119b89ec53412b288a78504658e10536d73422bdffe3ca05d7a7e6596" exitCode=1 Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.434546 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" event={"ID":"a9bb243e-e7c3-4f68-be35-d86fa049c570","Type":"ContainerDied","Data":"225e8ea119b89ec53412b288a78504658e10536d73422bdffe3ca05d7a7e6596"} 
Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.435575 4865 scope.go:117] "RemoveContainer" containerID="225e8ea119b89ec53412b288a78504658e10536d73422bdffe3ca05d7a7e6596" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.435974 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570)\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.439527 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tqvjg" event={"ID":"67ef4926-eb81-4d83-a9a1-4b7e9035892f","Type":"ContainerStarted","Data":"698abc3d5fa91f08ddd99cf5d4a3e1541c08e123add7341477c72b748e151684"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.442079 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4cb0a89a-49f9-4a31-9cec-669e88882018","Type":"ContainerStarted","Data":"49ae166ad84882a23ad44fef712d0badebf178470d9efe058ee94bbc08cd4ec3"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.445499 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwxxg" event={"ID":"2bcb4671-0b01-435d-aa4b-b9596654bfff","Type":"ContainerStarted","Data":"2f2c69ff0f641f6c45097eb8598d5fd1c85c9265ebd7f87fd99849105c7e12c2"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.446037 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.447928 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" event={"ID":"15434cef-8cb6-4386-b761-143f1819cac8","Type":"ContainerStarted","Data":"cb91c750d12981120827f7b542090517af3ea3a9ede28ff7cd23321b1eb4911e"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.450568 4865 generic.go:334] "Generic (PLEG): container finished" podID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" containerID="9b5e2623653f8096ac3aff4b822b14daf66396447038c4f8cf5cc198e1064fbb" exitCode=1 Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.450624 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" event={"ID":"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73","Type":"ContainerDied","Data":"9b5e2623653f8096ac3aff4b822b14daf66396447038c4f8cf5cc198e1064fbb"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.451274 4865 scope.go:117] "RemoveContainer" containerID="9b5e2623653f8096ac3aff4b822b14daf66396447038c4f8cf5cc198e1064fbb" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.451548 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5d646b7d76-7fdbl_openstack-operators(fb9fb53a-b18e-4291-ab1b-83ac2fd78a73)\"" 
pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.452793 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" event={"ID":"c8896518-4b5b-4712-9994-0bb445a3504f","Type":"ContainerStarted","Data":"33ac499f470506c966c92393cc774dff3e96da3fe666e63559c6a8d2737f9c79"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.457831 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerStarted","Data":"d1dcaba699a08e73d448a396063bd12ecc6334242e3ffa33fd02a518ec5c09fe"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.460464 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerStarted","Data":"d4e6d818c5e068d51936524f311b0a4ea0a416bc4ab3fabeff119dbfad8a049e"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.462830 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"78884295-a3de-4e00-bcc4-6a1627b50717","Type":"ContainerStarted","Data":"2cc0245f1d15eff96049d34cc06dcb933813705c786542b39b4a5473f45c5b52"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.465017 4865 generic.go:334] "Generic (PLEG): container finished" podID="10627175-8e39-4799-bec7-c0b49b938a29" containerID="fd24bf374cb93bd1ac3be24ba239a5a2297119650e90d74686695ca9642f7f88" exitCode=1 Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.465066 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" event={"ID":"10627175-8e39-4799-bec7-c0b49b938a29","Type":"ContainerDied","Data":"fd24bf374cb93bd1ac3be24ba239a5a2297119650e90d74686695ca9642f7f88"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.465591 4865 scope.go:117] "RemoveContainer" containerID="fd24bf374cb93bd1ac3be24ba239a5a2297119650e90d74686695ca9642f7f88" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.465831 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-b45d7bf98-4c94z_openstack-operators(10627175-8e39-4799-bec7-c0b49b938a29)\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.466233 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.469262 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-spv64_11be7549-5b2b-49e9-b11e-7035922b3673/ovsdb-server/0.log" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.469549 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-spv64" 
event={"ID":"11be7549-5b2b-49e9-b11e-7035922b3673","Type":"ContainerStarted","Data":"5a86a1d43e85b38b6993b5ec654fe9f013c9f0d6db19771e28f5d3b0f28b70f4"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.472664 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hh6cp" event={"ID":"14894ab1-ecfc-4a37-a4f3-bc526eb55ce2","Type":"ContainerStarted","Data":"4ab1d0c9fd5aea0d515c152abba5169a72bac9e5048f511526e805c225aef58f"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.474921 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" event={"ID":"4116044f-0cc3-41fb-9f26-536213e1dfa3","Type":"ContainerStarted","Data":"9d6ae669bfddc36d31dd962756be7302311916cffdc4eaaa27943b7ba6a5ee53"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.476454 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"5cf30925-0355-42db-9895-f23a97fca08e","Type":"ContainerStarted","Data":"2c074eedc46f6b5919e6478c186ab90ac4f3e4843c5b6d61cb12893e1bcfa394"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.477852 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.478169 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7c6b50247bc06e9d84e4254173bbd060deaec45f9a0c4411f23278aefc68da4e"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.479360 4865 generic.go:334] "Generic (PLEG): container finished" podID="2c3366d9-565f-4601-acbb-b473dcfe126c" containerID="59753f8a9ac601813cf61722fa2f680aaa9360854df772d81a25c24ca3e9ccbd" exitCode=1 Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.479398 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" event={"ID":"2c3366d9-565f-4601-acbb-b473dcfe126c","Type":"ContainerDied","Data":"59753f8a9ac601813cf61722fa2f680aaa9360854df772d81a25c24ca3e9ccbd"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.479916 4865 scope.go:117] "RemoveContainer" containerID="59753f8a9ac601813cf61722fa2f680aaa9360854df772d81a25c24ca3e9ccbd" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.480116 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-54ccf4f85d-l6w6d_openstack-operators(2c3366d9-565f-4601-acbb-b473dcfe126c)\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.481321 4865 generic.go:334] "Generic (PLEG): container finished" podID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" containerID="f3aedc3e84f03b5a8e35205c0b6b4acbbbb14f3224c2a1e020ffd763c7603f98" exitCode=1 Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.481357 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" 
event={"ID":"6aca96af-acfa-4c68-a2f4-ed19f08ddc4e","Type":"ContainerDied","Data":"f3aedc3e84f03b5a8e35205c0b6b4acbbbb14f3224c2a1e020ffd763c7603f98"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.481619 4865 scope.go:117] "RemoveContainer" containerID="f3aedc3e84f03b5a8e35205c0b6b4acbbbb14f3224c2a1e020ffd763c7603f98" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.481794 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-77d5c5b54f-qftlt_openstack-operators(6aca96af-acfa-4c68-a2f4-ed19f08ddc4e)\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.483819 4865 generic.go:334] "Generic (PLEG): container finished" podID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" containerID="8def9c24761c33c45159a6ec2ce99f5dd723a4647323c64aed86dad731e312d5" exitCode=1 Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.483885 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" event={"ID":"6d4fbfc8-900e-4c44-a458-039d37a6dd40","Type":"ContainerDied","Data":"8def9c24761c33c45159a6ec2ce99f5dd723a4647323c64aed86dad731e312d5"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.484277 4865 scope.go:117] "RemoveContainer" containerID="8def9c24761c33c45159a6ec2ce99f5dd723a4647323c64aed86dad731e312d5" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.484508 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-7bd9774b6-bqtq9_openstack-operators(6d4fbfc8-900e-4c44-a458-039d37a6dd40)\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.485544 4865 generic.go:334] "Generic (PLEG): container finished" podID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerID="f6355aa5a5dace796906b30065a06ceac12ef8ccf3d9daab57b9c0896657f733" exitCode=1 Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.486318 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" event={"ID":"b2ea2452-dc3b-4b93-a9d4-e562a63111c9","Type":"ContainerDied","Data":"f6355aa5a5dace796906b30065a06ceac12ef8ccf3d9daab57b9c0896657f733"} Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.486354 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.486929 4865 scope.go:117] "RemoveContainer" containerID="f6355aa5a5dace796906b30065a06ceac12ef8ccf3d9daab57b9c0896657f733" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.487302 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=openstack-operator-controller-manager-76c5c47f8f-p49qh_openstack-operators(b2ea2452-dc3b-4b93-a9d4-e562a63111c9)\"" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.491226 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.506292 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.526209 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.545802 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.551368 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.551451 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.566441 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.586475 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.606206 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: 
connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.626275 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.647177 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.656492 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.657196 4865 scope.go:117] "RemoveContainer" containerID="6e3b76caed0d76172727765da5704f1260f0f6ff0e355debf75064878c56078f" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.657405 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=webhook-server pod=metallb-operator-webhook-server-78f5776895-s7hqg_metallb-system(9177b0d0-3ce7-40fe-8567-85cb8dd5227a)\"" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.666200 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.686438 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.716396 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1 is running failed: container process not found" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.716637 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 
12:58:42.716739 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1 is running failed: container process not found" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.717912 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1 is running failed: container process not found" containerID="578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.717944 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 578753b471b554f8c1dcfb742567e1d0f2603ce7df192e21c2b0fd40006eafb1 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.726337 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.736775 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.737141 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.738177 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.738351 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.746005 4865 status_manager.go:851] "Failed to 
get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.766718 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.786844 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.808549 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.826690 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.846926 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.862434 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792 is running failed: container process not found" containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.862715 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792 is running failed: container process not found" containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.863098 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792 is running failed: container process not found" 
containerID="13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:42 crc kubenswrapper[4865]: E0123 12:58:42.863133 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13348ef9e2dc12d48edddcd4ca25ff005950a730d5f0ed1b8f4fa11a3ce11792 is running failed: container process not found" probeType="Readiness" pod="openstack-operators/openstack-operator-index-hzwqc" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" containerName="registry-server" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.868226 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.886363 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.905877 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.925972 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.946856 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.966658 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:42 crc kubenswrapper[4865]: I0123 12:58:42.985789 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 
38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.006273 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.025959 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.046996 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.066616 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.086742 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.106983 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.125966 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: E0123 12:58:43.130891 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:43 crc kubenswrapper[4865]: E0123 12:58:43.131132 4865 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:43 crc kubenswrapper[4865]: E0123 12:58:43.131337 4865 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" containerID="4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 12:58:43 crc kubenswrapper[4865]: E0123 12:58:43.131376 4865 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b3da019adcb6ef6bb8669e6fa3bb679d9518393c6b56e97c3f774392715e630 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.146303 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.166147 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.187000 4865 status_manager.go:851] "Failed to get status for pod" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/pods/csi-hostpathplugin-g7l9x\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.205805 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.226495 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.246238 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.266015 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.284259 4865 scope.go:117] "RemoveContainer" containerID="2504392e494c1ef358cfb124eb480bdbf70a7733b9f7b625220f52033a353160" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.286449 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.306524 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.326990 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.338695 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.339437 4865 scope.go:117] "RemoveContainer" containerID="a5571cd178bb438261317ba38387e608ad50d4bb004aa2d11391f7a29dd99411" Jan 23 12:58:43 crc kubenswrapper[4865]: E0123 12:58:43.339778 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller pod=controller-6968d8fdc4-8bjkz_metallb-system(3685d2b2-151b-479a-92c1-ae400eacd1b9)\"" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.345634 4865 status_manager.go:851] "Failed to get status for pod" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6497cbfbf6-fkmfr\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.366130 4865 status_manager.go:851] "Failed to get status for 
pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.386150 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.395724 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-48b72" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.406721 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.425755 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.446456 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.466582 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.486247 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.497042 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/2.log" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.497657 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/1.log" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.497970 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="582f83b4-97dc-4f56-9879-c73fab80488a" containerID="a731dd1f940b77d2d471bc77ff3834d9ed1c0aaa7dc63e059f42afa2cba767ec" exitCode=1 Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.498083 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" event={"ID":"582f83b4-97dc-4f56-9879-c73fab80488a","Type":"ContainerDied","Data":"a731dd1f940b77d2d471bc77ff3834d9ed1c0aaa7dc63e059f42afa2cba767ec"} Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.500266 4865 scope.go:117] "RemoveContainer" containerID="a731dd1f940b77d2d471bc77ff3834d9ed1c0aaa7dc63e059f42afa2cba767ec" Jan 23 12:58:43 crc kubenswrapper[4865]: E0123 12:58:43.500570 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=olm-operator pod=olm-operator-6b444d44fb-g5xkl_openshift-operator-lifecycle-manager(582f83b4-97dc-4f56-9879-c73fab80488a)\"" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.502720 4865 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-4g249 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.502743 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" start-of-body= Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.502762 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.502786 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.502834 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8081/readyz\": dial tcp 10.217.0.222:8081: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.502830 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.502887 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: 
connect: connection refused" start-of-body= Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.502902 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.502975 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.503001 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.504167 4865 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-sgsqx container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.70:5000/healthz\": dial tcp 10.217.0.70:5000: connect: connection refused" start-of-body= Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.505258 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.70:5000/healthz\": dial tcp 10.217.0.70:5000: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.505670 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.506100 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.525921 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.548563 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.566630 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" 
pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.587271 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: E0123 12:58:43.606555 4865 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/mysql-db-openstack-galera-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/mysql-db-openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openstack/openstack-galera-0" volumeName="mysql-db" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.625972 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: E0123 12:58:43.646692 4865 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/mysql-db-openstack-cell1-galera-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/mysql-db-openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openstack/openstack-cell1-galera-0" volumeName="mysql-db" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.666744 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: E0123 12:58:43.686016 4865 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" volumeName="registry-storage" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.705880 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.726730 4865 
status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.746714 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.766398 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.771448 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.771499 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.773488 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.773526 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.776936 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.777055 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.792462 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.792585 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.806029 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.826495 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.837797 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.883332 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 12:58:43 crc kubenswrapper[4865]: I0123 12:58:43.884075 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.010137 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.010239 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" start-of-body= Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.010295 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.028018 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.028281 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.028746 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.028949 4865 status_manager.go:851] "Failed to get status for pod" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f9669f7bd-ckgrk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.029195 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.029568 4865 status_manager.go:851] "Failed to get status for pod" podUID="c8896518-4b5b-4712-9994-0bb445a3504f" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-69f744f599-hrzcb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.029798 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.029983 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.030139 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.030287 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.046689 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: 
connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.065754 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.086378 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.106630 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.126689 4865 status_manager.go:851] "Failed to get status for pod" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" pod="openstack/cinder-scheduler-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/cinder-scheduler-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.144256 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.144544 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.144494 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.144805 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.145891 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.165696 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.166120 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" 
pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.185847 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.206620 4865 status_manager.go:851] "Failed to get status for pod" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/pods/csi-hostpathplugin-g7l9x\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.226514 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.246369 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.266420 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.287092 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.306090 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.326403 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.346519 4865 status_manager.go:851] "Failed to get status for pod" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6497cbfbf6-fkmfr\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.355750 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.355804 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.366903 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.386768 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.406568 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.426430 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.446024 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.466755 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.485815 4865 
status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.506187 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.515207 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-42cdm_843c383b-053f-42f5-88ce-7a216f5354a3/catalog-operator/1.log" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.515773 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" event={"ID":"843c383b-053f-42f5-88ce-7a216f5354a3","Type":"ContainerStarted","Data":"10cd8fab139cf2e40a506f73cefcdd6e86a95345cfc9fa18668937771bceec47"} Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.516594 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" start-of-body= Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.516769 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.518078 4865 scope.go:117] "RemoveContainer" containerID="a731dd1f940b77d2d471bc77ff3834d9ed1c0aaa7dc63e059f42afa2cba767ec" Jan 23 12:58:44 crc kubenswrapper[4865]: E0123 12:58:44.518335 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=olm-operator pod=olm-operator-6b444d44fb-g5xkl_openshift-operator-lifecycle-manager(582f83b4-97dc-4f56-9879-c73fab80488a)\"" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.518711 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.518784 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.526444 
4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.546345 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.566242 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.586229 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.605974 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.624756 4865 scope.go:117] "RemoveContainer" containerID="593f87f62b3ccdf0be76949bdac5a423993e1d8217741c16ed8d4bfe28a7e56c" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.625764 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.645817 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.667145 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 
12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.687796 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.706488 4865 status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.726837 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.746669 4865 status_manager.go:851] "Failed to get status for pod" podUID="c63db198-8ec8-42b1-8211-d207c172706c" pod="openstack/ceilometer-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ceilometer-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.766968 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.772499 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.772590 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.786965 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.804076 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 
12:58:44.804076 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.806530 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.826914 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.846383 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.866298 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.883969 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.884265 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.884401 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.884304 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.884577 4865 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.885308 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"432a1a4071bbfc164b3e505f3e4dcae88a37d22aa2a060b89d9d61d60cbf9348"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" containerMessage="Container packageserver failed liveness probe, will be restarted" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.885410 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" containerID="cri-o://432a1a4071bbfc164b3e505f3e4dcae88a37d22aa2a060b89d9d61d60cbf9348" gracePeriod=30 Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.885630 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.898276 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": read tcp 10.217.0.2:37936->10.217.0.38:5443: read: connection reset by peer" start-of-body= Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.898330 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": read tcp 10.217.0.2:37936->10.217.0.38:5443: read: connection reset by peer" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.906109 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.926249 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.945895 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 
12:58:44.966429 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:44 crc kubenswrapper[4865]: I0123 12:58:44.986575 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.005581 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: E0123 12:58:45.010501 4865 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="7s" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.026172 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.046284 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.066422 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.073287 4865 scope.go:117] "RemoveContainer" containerID="05c32a9f69fa45c4c849c2c0593634a1d358994f1d3669db97162d3139e34baf" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.086060 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.105865 4865 status_manager.go:851] "Failed to 
get status for pod" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" pod="openstack/ovn-controller-ovs-spv64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-ovs-spv64\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.118836 4865 scope.go:117] "RemoveContainer" containerID="d9147b8bace7855a843e97a1bac103beaa6d491e6eb97174767cc7a9b715c786" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.119376 4865 scope.go:117] "RemoveContainer" containerID="f885d8e004cc28a105f22692ba41d19be021fcaf768af9b3403a43a9e72e86cd" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.125994 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: E0123 12:58:45.145918 4865 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openstack/ovsdbserver-nb-0" volumeName="ovndbcluster-nb-etc-ovn" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.167796 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.186146 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.206357 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.226785 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.246681 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.266737 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.286173 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.306388 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.326808 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.346563 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.365726 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.386942 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.407177 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.426463 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.446984 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.466283 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.487002 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.506177 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.512426 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.512475 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.525744 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.546428 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.551155 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.551215 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.566264 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.586678 4865 status_manager.go:851] "Failed to get status for pod" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" pod="openstack/ovn-controller-ovs-spv64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-ovs-spv64\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.606308 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.626065 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.646530 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.666526 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.686276 4865 status_manager.go:851] 
"Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.705789 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.726558 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.746084 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.769222 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.773218 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.773279 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.786104 4865 status_manager.go:851] "Failed to get status for pod" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f9669f7bd-ckgrk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.806350 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 
12:58:45.826180 4865 status_manager.go:851] "Failed to get status for pod" podUID="c8896518-4b5b-4712-9994-0bb445a3504f" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-69f744f599-hrzcb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.846046 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.866569 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.885915 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.906882 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.926460 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.946053 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.966291 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:45 crc kubenswrapper[4865]: I0123 12:58:45.986579 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.006843 4865 status_manager.go:851] "Failed to get status for pod" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" pod="openstack/cinder-scheduler-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/cinder-scheduler-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.026312 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.045894 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.066671 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.086255 4865 status_manager.go:851] "Failed to get status for pod" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/pods/csi-hostpathplugin-g7l9x\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.106133 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.126514 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.145972 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.166967 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.185912 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.207069 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.217700 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.218394 4865 scope.go:117] "RemoveContainer" containerID="6c85179785689b31c63f8780ade170488f7194ec897911ab511ac9d07ded86b1" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.225850 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.226249 4865 status_manager.go:851] "Failed to get status for pod" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6497cbfbf6-fkmfr\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.228873 4865 scope.go:117] "RemoveContainer" containerID="82a5dc53e670de19adf070d15e3558500b1a04ef07b5381860ecdd360fb8e0fd" Jan 23 12:58:46 crc kubenswrapper[4865]: E0123 12:58:46.231079 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-69cf5d4557-9jp5b_openstack-operators(bdf8f14b-af0d-43cc-b624-7dab2879dc4b)\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.246953 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial 
tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.259432 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.260563 4865 scope.go:117] "RemoveContainer" containerID="fd24bf374cb93bd1ac3be24ba239a5a2297119650e90d74686695ca9642f7f88" Jan 23 12:58:46 crc kubenswrapper[4865]: E0123 12:58:46.260883 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-b45d7bf98-4c94z_openstack-operators(10627175-8e39-4799-bec7-c0b49b938a29)\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.266253 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.290558 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.309187 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.340355 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.360176 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.361389 4865 scope.go:117] "RemoveContainer" containerID="5ab50c49bb504542d7db9345701205b0b89a3c8f45e8e144d3514ccb73b674a6" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.377872 4865 scope.go:117] "RemoveContainer" containerID="1f82bbc6562ef119de8d44283cf07658ee939e78d2e833cee725ec522543517b" Jan 23 12:58:46 crc kubenswrapper[4865]: E0123 12:58:46.381140 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-78fdd796fd-8qtnc_openstack-operators(da1cf187-8918-46b4-ab33-e8912c9d0dd6)\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.383704 4865 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.384161 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.384441 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.387114 4865 scope.go:117] "RemoveContainer" containerID="f3aedc3e84f03b5a8e35205c0b6b4acbbbb14f3224c2a1e020ffd763c7603f98" Jan 23 12:58:46 crc kubenswrapper[4865]: E0123 12:58:46.387911 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-77d5c5b54f-qftlt_openstack-operators(6aca96af-acfa-4c68-a2f4-ed19f08ddc4e)\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.391086 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.391351 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.391530 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.410069 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.420291 4865 scope.go:117] "RemoveContainer" containerID="efa85f7f325947f3c6e17fa6b4b0e0f0e4613a29c14fa6a93c768879ca7375db" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.426089 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.446012 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.466908 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.486957 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.507326 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.526029 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.545827 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.566414 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.614646 4865 status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.615655 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.618663 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.619402 4865 scope.go:117] "RemoveContainer" containerID="954df5dd39f6d5ed839258a04ecc82954ab9e41d05f0cb9bba184f8fd069c651" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.626211 4865 status_manager.go:851] "Failed to get status for pod" podUID="c63db198-8ec8-42b1-8211-d207c172706c" pod="openstack/ceilometer-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ceilometer-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.629805 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8080/livez\": dial tcp 10.217.0.222:8080: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.630034 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.630593 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8081/readyz\": dial tcp 10.217.0.222:8081: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.630788 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.222:8081/readyz\": dial tcp 10.217.0.222:8081: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.646667 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.669171 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.686700 4865 status_manager.go:851] "Failed to get status for 
pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.693004 4865 scope.go:117] "RemoveContainer" containerID="76f1ecd5b0730a0e64ce51eb0d79c203a16172b032a4b3c0ff734fdda3df422e" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.699901 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.700685 4865 scope.go:117] "RemoveContainer" containerID="232be94353aac2e87626af7b68144c0253405f7ad62d2f7221de27a4f2375137" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.708872 4865 status_manager.go:851] "Failed to get status for pod" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-b8b6d4659-9fl7w\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.734565 4865 status_manager.go:851] "Failed to get status for pod" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" pod="openstack/horizon-66f7b94cdb-f7pw2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-66f7b94cdb-f7pw2\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.745749 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.746659 4865 scope.go:117] "RemoveContainer" containerID="ffee0a65d3a9d4aaf1aaaa4f2d0daee9888f2045360ad40a337ab9bdd0bd24ba" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.750103 4865 status_manager.go:851] "Failed to get status for pod" podUID="10627175-8e39-4799-bec7-c0b49b938a29" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/designate-operator-controller-manager-b45d7bf98-4c94z\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.766592 4865 status_manager.go:851] "Failed to get status for pod" podUID="4836de1a-4a0e-4d02-af0e-3408b4814ecf" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-6b68b8b854c5br7\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.772482 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.772550 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" 
containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.782270 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.785796 4865 scope.go:117] "RemoveContainer" containerID="225e8ea119b89ec53412b288a78504658e10536d73422bdffe3ca05d7a7e6596" Jan 23 12:58:46 crc kubenswrapper[4865]: E0123 12:58:46.786083 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570)\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.786460 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" pod="metallb-system/speaker-szb9h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/speaker-szb9h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.806319 4865 status_manager.go:851] "Failed to get status for pod" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-c87fff755-mlm5v\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.821944 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.823710 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.823985 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": dial tcp 10.217.0.77:8081: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.829374 4865 status_manager.go:851] "Failed to get status for pod" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/placement-operator-controller-manager-5d646b7d76-7fdbl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.847055 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 
12:58:46.847796 4865 scope.go:117] "RemoveContainer" containerID="e7440e71b764fc1170b4e582df1fa0de60d00e2cc4d7348e19eb5ccc39b95a74" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.855718 4865 status_manager.go:851] "Failed to get status for pod" podUID="0167f850-ba43-426a-8c56-aa171131e7da" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/heat-operator-controller-manager-594c8c9d5d-fsch6\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.866293 4865 status_manager.go:851] "Failed to get status for pod" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-manager-76c5c47f8f-p49qh\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.866417 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.867177 4865 scope.go:117] "RemoveContainer" containerID="8def9c24761c33c45159a6ec2ce99f5dd723a4647323c64aed86dad731e312d5" Jan 23 12:58:46 crc kubenswrapper[4865]: E0123 12:58:46.867434 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-7bd9774b6-bqtq9_openstack-operators(6d4fbfc8-900e-4c44-a458-039d37a6dd40)\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.868655 4865 scope.go:117] "RemoveContainer" containerID="adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.888480 4865 status_manager.go:851] "Failed to get status for pod" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/tempest-tests-tempest-s00-multi-thread-testing\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.909164 4865 status_manager.go:851] "Failed to get status for pod" podUID="c63db198-8ec8-42b1-8211-d207c172706c" pod="openstack/ceilometer-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ceilometer-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.921779 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.922671 4865 scope.go:117] "RemoveContainer" containerID="f632a0ec27e84458e4e6a53018ba24d615fb3557c7f27191234cdc3926b8f3a4" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.925918 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-69cf5d4557-9jp5b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.940802 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.941745 4865 scope.go:117] "RemoveContainer" containerID="9b5e2623653f8096ac3aff4b822b14daf66396447038c4f8cf5cc198e1064fbb" Jan 23 12:58:46 crc kubenswrapper[4865]: E0123 12:58:46.942032 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5d646b7d76-7fdbl_openstack-operators(fb9fb53a-b18e-4291-ab1b-83ac2fd78a73)\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.945891 4865 status_manager.go:851] "Failed to get status for pod" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" pod="openshift-marketplace/redhat-marketplace-nhd4g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-nhd4g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.966173 4865 status_manager.go:851] "Failed to get status for pod" podUID="15434cef-8cb6-4386-b761-143f1819cac8" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-cainjector-cf98fcc89-7kqtt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:46 crc kubenswrapper[4865]: I0123 12:58:46.987057 4865 status_manager.go:851] "Failed to get status for pod" podUID="c011a295-505e-465c-a8d6-a647d7ad8ed2" pod="openstack-operators/openstack-operator-index-hzwqc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-index-hzwqc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.006342 4865 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.007649 4865 scope.go:117] "RemoveContainer" containerID="0b2a7803942b15c05aaa94d320090897efd8100e0a4bcd07a1a0e623a23a3516" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.026220 4865 status_manager.go:851] "Failed to get status for pod" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" pod="openshift-marketplace/community-operators-hh6cp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-hh6cp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.046124 4865 status_manager.go:851] "Failed to get status for pod" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/octavia-operator-controller-manager-7bd9774b6-bqtq9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.057470 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.057826 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.057874 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.058272 4865 patch_prober.go:28] interesting pod/controller-manager-f9669f7bd-ckgrk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.058319 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.066673 4865 status_manager.go:851] "Failed to get status for pod" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" pod="openshift-marketplace/certified-operators-qwxxg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qwxxg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.079443 4865 scope.go:117] "RemoveContainer" containerID="220ca834be13b31e6269099d1d8bc1f8f8a8374c70fc5e8ee1abc9fd26326377" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.087518 4865 status_manager.go:851] "Failed to get status for pod" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-7489ccbc46-6gcbp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.106295 4865 status_manager.go:851] "Failed to get status for pod" podUID="c82fe7f9-37d7-4874-9b2d-ba437546562f" pod="openstack/ovn-northd-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-northd-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.126190 4865 status_manager.go:851] "Failed to get status for pod" podUID="c6cf7afb-e04b-428e-a9d6-448bec887e7e" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qtxv5" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-webhook-8474b5b9d8-qtxv5\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.128992 4865 scope.go:117] "RemoveContainer" containerID="32f90a1a6ab5a61d1d5a4a1eba24f87dbec2aef63107d25e9e01782e3a02c493" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.146941 4865 status_manager.go:851] "Failed to get status for pod" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" pod="metallb-system/controller-6968d8fdc4-8bjkz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/controller-6968d8fdc4-8bjkz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.168722 4865 status_manager.go:851] "Failed to get status for pod" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/horizon-operator-controller-manager-77d5c5b54f-qftlt\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.187038 4865 status_manager.go:851] "Failed to get status for pod" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-7df9698d5d-lk94b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.207707 4865 status_manager.go:851] "Failed to get status for pod" podUID="bdee5ba9-99e1-495c-9b52-f670cbbffea2" pod="openshift-console/downloads-7954f5f757-48b72" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-48b72\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.226170 4865 status_manager.go:851] "Failed to get status for pod" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-42cdm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.245822 4865 status_manager.go:851] "Failed to get status for pod" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/manila-operator-controller-manager-78c6999f6f-bps6b\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.255079 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.266667 4865 status_manager.go:851] "Failed to get status for pod" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-547cbdb99f-zm52l\": dial tcp 38.102.83.80:6443: connect: connection 
refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.286125 4865 status_manager.go:851] "Failed to get status for pod" podUID="11be7549-5b2b-49e9-b11e-7035922b3673" pod="openstack/ovn-controller-ovs-spv64" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-ovs-spv64\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.296382 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.297132 4865 scope.go:117] "RemoveContainer" containerID="7b28b314a62253bab2dff6dd6dcdbd4bdcfa958e016c67c5de24c34342098c1a" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.299215 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.299555 4865 scope.go:117] "RemoveContainer" containerID="589ec817fa33d2945a77636e47bece82f5412244ceb74275ac90da7c251be8f4" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.305919 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.331110 4865 status_manager.go:851] "Failed to get status for pod" podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.346999 4865 status_manager.go:851] "Failed to get status for pod" podUID="4afdf9c5-20b1-4482-a599-36000ac58add" pod="openshift-ovn-kubernetes/ovnkube-node-hvjnd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-hvjnd\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.367248 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.369048 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/1.log" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.372442 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/0.log" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.374075 4865 generic.go:334] "Generic (PLEG): container finished" podID="141f6171-3d39-421b-98f4-6accc5d30ae2" 
containerID="d3495ed84a53f12e4007a0f99ec4d52ea21f2cbe622e4f903c019346c6618125" exitCode=255 Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.374144 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" event={"ID":"141f6171-3d39-421b-98f4-6accc5d30ae2","Type":"ContainerDied","Data":"d3495ed84a53f12e4007a0f99ec4d52ea21f2cbe622e4f903c019346c6618125"} Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.381894 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-42cdm_843c383b-053f-42f5-88ce-7a216f5354a3/catalog-operator/2.log" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.389771 4865 status_manager.go:851] "Failed to get status for pod" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.390161 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-42cdm_843c383b-053f-42f5-88ce-7a216f5354a3/catalog-operator/1.log" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.392107 4865 generic.go:334] "Generic (PLEG): container finished" podID="843c383b-053f-42f5-88ce-7a216f5354a3" containerID="10cd8fab139cf2e40a506f73cefcdd6e86a95345cfc9fa18668937771bceec47" exitCode=1 Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.392169 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" event={"ID":"843c383b-053f-42f5-88ce-7a216f5354a3","Type":"ContainerDied","Data":"10cd8fab139cf2e40a506f73cefcdd6e86a95345cfc9fa18668937771bceec47"} Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.392803 4865 scope.go:117] "RemoveContainer" containerID="10cd8fab139cf2e40a506f73cefcdd6e86a95345cfc9fa18668937771bceec47" Jan 23 12:58:47 crc kubenswrapper[4865]: E0123 12:58:47.393026 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=catalog-operator pod=catalog-operator-68c6474976-42cdm_openshift-operator-lifecycle-manager(843c383b-053f-42f5-88ce-7a216f5354a3)\"" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.402424 4865 scope.go:117] "RemoveContainer" containerID="a731dd1f940b77d2d471bc77ff3834d9ed1c0aaa7dc63e059f42afa2cba767ec" Jan 23 12:58:47 crc kubenswrapper[4865]: E0123 12:58:47.402662 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=olm-operator pod=olm-operator-6b444d44fb-g5xkl_openshift-operator-lifecycle-manager(582f83b4-97dc-4f56-9879-c73fab80488a)\"" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.407014 4865 status_manager.go:851] "Failed to get status for pod" podUID="967c3782-1bce-4145-8244-7650fe19dc22" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ironic-operator-controller-manager-69d6c9f5b8-h6dkp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.427157 4865 status_manager.go:851] "Failed to get status for pod" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/telemetry-operator-controller-manager-85cd9769bb-kkkcn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.446051 4865 status_manager.go:851] "Failed to get status for pod" podUID="78884295-a3de-4e00-bcc4-6a1627b50717" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: E0123 12:58:47.450276 4865 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event=< Jan 23 12:58:47 crc kubenswrapper[4865]: &Event{ObjectMeta:{catalog-operator-68c6474976-42cdm.188d5d87792a6b03 openshift-operator-lifecycle-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-operator-lifecycle-manager,Name:catalog-operator-68c6474976-42cdm,UID:843c383b-053f-42f5-88ce-7a216f5354a3,APIVersion:v1,ResourceVersion:27211,FieldPath:spec.containers{catalog-operator},},Reason:ProbeError,Message:Liveness probe error: Get "https://10.217.0.26:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 23 12:58:47 crc kubenswrapper[4865]: body: Jan 23 12:58:47 crc kubenswrapper[4865]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 12:57:45.061264131 +0000 UTC m=+3909.230336357,LastTimestamp:2026-01-23 12:57:45.061264131 +0000 UTC m=+3909.230336357,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 12:58:47 crc kubenswrapper[4865]: > Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.465763 4865 status_manager.go:851] "Failed to get status for pod" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-vrzqb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.486542 4865 status_manager.go:851] "Failed to get status for pod" podUID="97f32b90-08dc-4333-95e6-a2e85648931f" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f9669f7bd-ckgrk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.506769 4865 status_manager.go:851] "Failed to get status for pod" podUID="a2830362-05e6-4a49-887e-cf3d25cf65a4" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-sgsqx\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.526733 4865 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.546526 4865 status_manager.go:851] "Failed to get status for pod" podUID="c8896518-4b5b-4712-9994-0bb445a3504f" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-69f744f599-hrzcb\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.566096 4865 status_manager.go:851] "Failed to get status for pod" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-58897d9998-8lsbn\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.586284 4865 status_manager.go:851] "Failed to get status for pod" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-znx59\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.606410 4865 status_manager.go:851] "Failed to get status for pod" podUID="840bd4e6-18da-498a-bd3a-d4e80c69ec70" pod="openstack-operators/openstack-operator-controller-init-6bcd4d8dcc-2sgsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-operator-controller-init-6bcd4d8dcc-2sgsk\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.625873 4865 status_manager.go:851] "Failed to get status for pod" podUID="218b7d21-dfbb-42f7-a115-3867493d97b3" pod="openshift-nmstate/nmstate-handler-8547q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/pods/nmstate-handler-8547q\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.645910 4865 status_manager.go:851] "Failed to get status for pod" podUID="93194445-a021-4960-ab82-085f13cc959d" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/ovn-operator-controller-manager-55db956ddc-cbz92\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.665785 4865 status_manager.go:851] "Failed to get status for pod" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" pod="openshift-marketplace/redhat-operators-tqvjg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-tqvjg\": dial tcp 38.102.83.80:6443: connect: 
connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.685706 4865 status_manager.go:851] "Failed to get status for pod" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-xwjxp\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.705822 4865 status_manager.go:851] "Failed to get status for pod" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" pod="openstack/cinder-scheduler-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/cinder-scheduler-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.725761 4865 status_manager.go:851] "Failed to get status for pod" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-7xpgm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.746299 4865 status_manager.go:851] "Failed to get status for pod" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-59dd8b7cbf-nppmq\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.766346 4865 status_manager.go:851] "Failed to get status for pod" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/nova-operator-controller-manager-6b8bc8d87d-6t8ts\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.776524 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:47 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:47 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:47 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.776582 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.785863 4865 status_manager.go:851] "Failed to get status for pod" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/pods/csi-hostpathplugin-g7l9x\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.805702 4865 status_manager.go:851] "Failed to get status for pod" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" 
pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-5ffb9c6597-7mv2d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.826408 4865 status_manager.go:851] "Failed to get status for pod" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/pods/cert-manager-webhook-687f57d79b-x972r\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.846863 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-54ccf4f85d-l6w6d\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.866554 4865 status_manager.go:851] "Failed to get status for pod" podUID="d8331842-a45a-4cbf-a55b-0d8dde7f69eb" pod="openstack/ovn-controller-hz4vm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/ovn-controller-hz4vm\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.886211 4865 status_manager.go:851] "Failed to get status for pod" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b444d44fb-g5xkl\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.906157 4865 status_manager.go:851] "Failed to get status for pod" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-webhook-server-78f5776895-s7hqg\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.925953 4865 status_manager.go:851] "Failed to get status for pod" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/rabbitmq-cluster-operator-manager-668c99d594-fdkt9\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.946739 4865 status_manager.go:851] "Failed to get status for pod" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-6497cbfbf6-fkmfr\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.966426 4865 status_manager.go:851] "Failed to get status for pod" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-4g249\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:47 crc kubenswrapper[4865]: I0123 12:58:47.985769 4865 status_manager.go:851] "Failed to get status for pod" podUID="9faffae5-73bb-4980-8092-b79a6888476d" pod="metallb-system/frr-k8s-gh89m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-gh89m\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.005777 4865 status_manager.go:851] "Failed to get status for pod" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5d8f59fb49-hnv8g\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.026063 4865 status_manager.go:851] "Failed to get status for pod" podUID="5cf30925-0355-42db-9895-f23a97fca08e" pod="openstack/openstack-cell1-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-cell1-galera-0\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.046415 4865 status_manager.go:851] "Failed to get status for pod" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.066265 4865 status_manager.go:851] "Failed to get status for pod" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" pod="openshift-console/console-5d7d54b946-29gbz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/console-5d7d54b946-29gbz\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.110102 4865 status_manager.go:851] "Failed to get status for pod" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/frr-k8s-webhook-server-7df86c4f6c-dkvk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.110925 4865 status_manager.go:851] "Failed to get status for pod" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" pod="openshift-ingress/router-default-5444994796-swk7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/pods/router-default-5444994796-swk7h\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.126365 4865 status_manager.go:851] "Failed to get status for pod" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/glance-operator-controller-manager-78fdd796fd-8qtnc\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.146881 4865 status_manager.go:851] "Failed to get status for pod" 
podUID="50ab40ef-54b8-4392-89ad-6b73c346c225" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-qmwk4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/test-operator-controller-manager-69797bbcbd-qmwk4\": dial tcp 38.102.83.80:6443: connect: connection refused" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.258306 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.340265 4865 scope.go:117] "RemoveContainer" containerID="c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.347471 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.414762 4865 generic.go:334] "Generic (PLEG): container finished" podID="8e227974-40b8-4d16-8d5f-961b705a9740" containerID="19590472562d768b58a36819b0839df5422b20ecc8e2438bd400797b00c548e4" exitCode=1 Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.414863 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" event={"ID":"8e227974-40b8-4d16-8d5f-961b705a9740","Type":"ContainerDied","Data":"19590472562d768b58a36819b0839df5422b20ecc8e2438bd400797b00c548e4"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.415482 4865 scope.go:117] "RemoveContainer" containerID="19590472562d768b58a36819b0839df5422b20ecc8e2438bd400797b00c548e4" Jan 23 12:58:48 crc kubenswrapper[4865]: E0123 12:58:48.415757 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-fdkt9_openstack-operators(8e227974-40b8-4d16-8d5f-961b705a9740)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.417682 4865 generic.go:334] "Generic (PLEG): container finished" podID="967c3782-1bce-4145-8244-7650fe19dc22" containerID="ec6b92229dbc3dc459ac92cd5bff829cdf79f412c7047ece466b803430a755e2" exitCode=1 Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.417752 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" event={"ID":"967c3782-1bce-4145-8244-7650fe19dc22","Type":"ContainerDied","Data":"ec6b92229dbc3dc459ac92cd5bff829cdf79f412c7047ece466b803430a755e2"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.418390 4865 scope.go:117] "RemoveContainer" containerID="ec6b92229dbc3dc459ac92cd5bff829cdf79f412c7047ece466b803430a755e2" Jan 23 12:58:48 crc kubenswrapper[4865]: E0123 12:58:48.418638 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-h6dkp_openstack-operators(967c3782-1bce-4145-8244-7650fe19dc22)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 
12:58:48.421568 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" event={"ID":"1959a742-ade2-4266-9a93-e96a1b6e3908","Type":"ContainerStarted","Data":"b29868d5a9978529bde12d6e5328ff0f7fb4c7425a6fae7ca2cf9640eba7d400"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.425472 4865 generic.go:334] "Generic (PLEG): container finished" podID="0167f850-ba43-426a-8c56-aa171131e7da" containerID="b8e261b2f93c481bba3b0f111f268ed851bd5a73ba1244cfab21e04a3b5bcad8" exitCode=1 Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.425705 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" event={"ID":"0167f850-ba43-426a-8c56-aa171131e7da","Type":"ContainerDied","Data":"b8e261b2f93c481bba3b0f111f268ed851bd5a73ba1244cfab21e04a3b5bcad8"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.426585 4865 scope.go:117] "RemoveContainer" containerID="b8e261b2f93c481bba3b0f111f268ed851bd5a73ba1244cfab21e04a3b5bcad8" Jan 23 12:58:48 crc kubenswrapper[4865]: E0123 12:58:48.427123 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=heat-operator-controller-manager-594c8c9d5d-fsch6_openstack-operators(0167f850-ba43-426a-8c56-aa171131e7da)\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.432312 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" event={"ID":"93194445-a021-4960-ab82-085f13cc959d","Type":"ContainerStarted","Data":"308dca9067f380a5a0d9f4213ded0cec44fafe37706380068a2a22ca270c04ba"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.442981 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/2.log" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.443456 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/1.log" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.445993 4865 generic.go:334] "Generic (PLEG): container finished" podID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerID="9d6ae669bfddc36d31dd962756be7302311916cffdc4eaaa27943b7ba6a5ee53" exitCode=1 Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.446049 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" event={"ID":"4116044f-0cc3-41fb-9f26-536213e1dfa3","Type":"ContainerDied","Data":"9d6ae669bfddc36d31dd962756be7302311916cffdc4eaaa27943b7ba6a5ee53"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.446662 4865 scope.go:117] "RemoveContainer" containerID="9d6ae669bfddc36d31dd962756be7302311916cffdc4eaaa27943b7ba6a5ee53" Jan 23 12:58:48 crc kubenswrapper[4865]: E0123 12:58:48.446902 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=frr-k8s-webhook-server pod=frr-k8s-webhook-server-7df86c4f6c-dkvk4_metallb-system(4116044f-0cc3-41fb-9f26-536213e1dfa3)\"" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.458806 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-d55dfcdfc-xwjxp_2699af1d-57a0-4ce2-9550-b423f9eafc0f/packageserver/1.log" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.464429 4865 generic.go:334] "Generic (PLEG): container finished" podID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerID="432a1a4071bbfc164b3e505f3e4dcae88a37d22aa2a060b89d9d61d60cbf9348" exitCode=2 Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.464534 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" event={"ID":"2699af1d-57a0-4ce2-9550-b423f9eafc0f","Type":"ContainerDied","Data":"432a1a4071bbfc164b3e505f3e4dcae88a37d22aa2a060b89d9d61d60cbf9348"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.464567 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" event={"ID":"2699af1d-57a0-4ce2-9550-b423f9eafc0f","Type":"ContainerStarted","Data":"a2967002ccd792bae8810b20b9e2d2cfe8625b95767e8c6b04826a95e9029999"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.468159 4865 generic.go:334] "Generic (PLEG): container finished" podID="e92ddc14-bdb6-4407-b8a3-047079030166" containerID="725cbb8bdc789381556fc95b10f61f4454ce204d5e88b36b62daaf100a191610" exitCode=1 Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.468253 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" event={"ID":"e92ddc14-bdb6-4407-b8a3-047079030166","Type":"ContainerDied","Data":"725cbb8bdc789381556fc95b10f61f4454ce204d5e88b36b62daaf100a191610"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.469058 4865 scope.go:117] "RemoveContainer" containerID="725cbb8bdc789381556fc95b10f61f4454ce204d5e88b36b62daaf100a191610" Jan 23 12:58:48 crc kubenswrapper[4865]: E0123 12:58:48.469755 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-b8b6d4659-9fl7w_openstack-operators(e92ddc14-bdb6-4407-b8a3-047079030166)\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.471542 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/1.log" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.472841 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/0.log" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.473518 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" event={"ID":"141f6171-3d39-421b-98f4-6accc5d30ae2","Type":"ContainerStarted","Data":"da789a528559c31f8bf0e20e446bbe2e404c5e09244ced0365c858057a65f55a"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.473789 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.479127 4865 generic.go:334] "Generic (PLEG): container finished" podID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" containerID="f580a6a63b6cd64621bf00584c28b619db653ef30817009688d5a3033aaf33c6" exitCode=1 Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.479207 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" event={"ID":"5fb13a32-67c3-46b1-a0b8-573e941e6c7e","Type":"ContainerDied","Data":"f580a6a63b6cd64621bf00584c28b619db653ef30817009688d5a3033aaf33c6"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.479908 4865 scope.go:117] "RemoveContainer" containerID="f580a6a63b6cd64621bf00584c28b619db653ef30817009688d5a3033aaf33c6" Jan 23 12:58:48 crc kubenswrapper[4865]: E0123 12:58:48.480142 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=barbican-operator-controller-manager-59dd8b7cbf-nppmq_openstack-operators(5fb13a32-67c3-46b1-a0b8-573e941e6c7e)\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.481966 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" event={"ID":"d2f4bfa4-63e2-418a-b52a-75d2992af596","Type":"ContainerStarted","Data":"32ae1b4369c9079abde65d5f4e4fa0adee9c6e4bc077842197a00acdec6f66a3"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.497770 4865 generic.go:334] "Generic (PLEG): container finished" podID="dbfec6f5-80b4-480f-a958-c3107b2776c0" containerID="e1228c3b7d8949233ea788cf4a405373f71908b01d266909368fcf0063fd8746" exitCode=1 Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.498136 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" event={"ID":"dbfec6f5-80b4-480f-a958-c3107b2776c0","Type":"ContainerDied","Data":"e1228c3b7d8949233ea788cf4a405373f71908b01d266909368fcf0063fd8746"} Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.498550 4865 scope.go:117] "RemoveContainer" containerID="e1228c3b7d8949233ea788cf4a405373f71908b01d266909368fcf0063fd8746" Jan 23 12:58:48 crc kubenswrapper[4865]: E0123 12:58:48.498795 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-85cd9769bb-kkkcn_openstack-operators(dbfec6f5-80b4-480f-a958-c3107b2776c0)\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.498832 4865 scope.go:117] "RemoveContainer" containerID="10cd8fab139cf2e40a506f73cefcdd6e86a95345cfc9fa18668937771bceec47" Jan 23 12:58:48 crc kubenswrapper[4865]: E0123 12:58:48.498991 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=catalog-operator pod=catalog-operator-68c6474976-42cdm_openshift-operator-lifecycle-manager(843c383b-053f-42f5-88ce-7a216f5354a3)\"" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.688369 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.688726 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.688384 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.688790 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.785271 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:48 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:48 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:48 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.785346 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:48 crc kubenswrapper[4865]: I0123 12:58:48.985203 4865 scope.go:117] "RemoveContainer" containerID="1b9b6be821f701ac56b53a484a353bea5212b6f02ef587724d911e861b2fc97c" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.198526 4865 scope.go:117] "RemoveContainer" containerID="a2cc111c5b050a0cea0b3665386dcd21df0b26072f8ef117916e8082c8b01f56" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.415698 4865 scope.go:117] "RemoveContainer" containerID="54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.508116 4865 scope.go:117] "RemoveContainer" containerID="cc196a7d0a8483e448852dd3080814eafacc01c4fa3eef717a29e19532163b8f" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.582869 4865 generic.go:334] "Generic (PLEG): container finished" podID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerID="f6d9b4b3d5c12dd18a1e548634a2f2a1a036af095d890d878feed5bd34197f18" exitCode=1 Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.582943 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" event={"ID":"429b62c2-b748-40b1-b00f-a1b0488fc5d0","Type":"ContainerDied","Data":"f6d9b4b3d5c12dd18a1e548634a2f2a1a036af095d890d878feed5bd34197f18"} Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.583792 4865 scope.go:117] "RemoveContainer" containerID="f6d9b4b3d5c12dd18a1e548634a2f2a1a036af095d890d878feed5bd34197f18" Jan 23 12:58:49 crc kubenswrapper[4865]: E0123 12:58:49.584165 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-5d8f59fb49-hnv8g_openstack-operators(429b62c2-b748-40b1-b00f-a1b0488fc5d0)\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.619047 4865 scope.go:117] "RemoveContainer" containerID="574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.619090 4865 generic.go:334] "Generic (PLEG): container finished" podID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerID="49ae166ad84882a23ad44fef712d0badebf178470d9efe058ee94bbc08cd4ec3" exitCode=1 Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.619114 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4cb0a89a-49f9-4a31-9cec-669e88882018","Type":"ContainerDied","Data":"49ae166ad84882a23ad44fef712d0badebf178470d9efe058ee94bbc08cd4ec3"} Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.620054 4865 scope.go:117] "RemoveContainer" containerID="49ae166ad84882a23ad44fef712d0badebf178470d9efe058ee94bbc08cd4ec3" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.634412 4865 generic.go:334] "Generic (PLEG): container finished" podID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" containerID="d1dcaba699a08e73d448a396063bd12ecc6334242e3ffa33fd02a518ec5c09fe" exitCode=255 Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.634485 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerDied","Data":"d1dcaba699a08e73d448a396063bd12ecc6334242e3ffa33fd02a518ec5c09fe"} Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.635232 4865 scope.go:117] "RemoveContainer" containerID="d1dcaba699a08e73d448a396063bd12ecc6334242e3ffa33fd02a518ec5c09fe" Jan 23 12:58:49 crc kubenswrapper[4865]: E0123 12:58:49.635442 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=csi-provisioner pod=csi-hostpathplugin-g7l9x_hostpath-provisioner(f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb)\"" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.637046 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/1.log" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.637281 4865 generic.go:334] "Generic (PLEG): container finished" podID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerID="ae1508f276d444032c118fb0b67f9e5568656f5e50cd128fa88fdc40396b41ce" exitCode=1 Jan 23 12:58:49 crc 
kubenswrapper[4865]: I0123 12:58:49.637318 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" event={"ID":"189c80ac-7038-4b48-bebb-5c5d7e2cd362","Type":"ContainerDied","Data":"ae1508f276d444032c118fb0b67f9e5568656f5e50cd128fa88fdc40396b41ce"} Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.637950 4865 scope.go:117] "RemoveContainer" containerID="ae1508f276d444032c118fb0b67f9e5568656f5e50cd128fa88fdc40396b41ce" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.649778 4865 generic.go:334] "Generic (PLEG): container finished" podID="8ef0fdaa-8086-467d-8106-5c6dec532dba" containerID="6db42aed1c07ce277ecd3b8215a67495dfaf2bf15960b45ae32504ccb5fd0d52" exitCode=1 Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.649874 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" event={"ID":"8ef0fdaa-8086-467d-8106-5c6dec532dba","Type":"ContainerDied","Data":"6db42aed1c07ce277ecd3b8215a67495dfaf2bf15960b45ae32504ccb5fd0d52"} Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.650234 4865 scope.go:117] "RemoveContainer" containerID="6db42aed1c07ce277ecd3b8215a67495dfaf2bf15960b45ae32504ccb5fd0d52" Jan 23 12:58:49 crc kubenswrapper[4865]: E0123 12:58:49.650469 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=watcher-operator-controller-manager-5ffb9c6597-7mv2d_openstack-operators(8ef0fdaa-8086-467d-8106-5c6dec532dba)\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.663246 4865 generic.go:334] "Generic (PLEG): container finished" podID="661fbfd2-7d52-419a-943f-c57854d2306b" containerID="c8e8f994678810f599e85ee5892ed48a42135387b679873c1a02e57882bacccd" exitCode=1 Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.663331 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" event={"ID":"661fbfd2-7d52-419a-943f-c57854d2306b","Type":"ContainerDied","Data":"c8e8f994678810f599e85ee5892ed48a42135387b679873c1a02e57882bacccd"} Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.663992 4865 scope.go:117] "RemoveContainer" containerID="c8e8f994678810f599e85ee5892ed48a42135387b679873c1a02e57882bacccd" Jan 23 12:58:49 crc kubenswrapper[4865]: E0123 12:58:49.664216 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-547cbdb99f-zm52l_openstack-operators(661fbfd2-7d52-419a-943f-c57854d2306b)\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.674188 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7489ccbc46-6gcbp_a51b0d26-bdc8-433f-90e5-d90b9bd94373/oauth-openshift/1.log" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.674698 4865 generic.go:334] "Generic (PLEG): container finished" podID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerID="b328d9b54b4bb04befe9c7fb9488ed21d048a4b9e5592f701a8c415ab5bad0a2" exitCode=255 Jan 23 12:58:49 crc 
kubenswrapper[4865]: I0123 12:58:49.674762 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" event={"ID":"a51b0d26-bdc8-433f-90e5-d90b9bd94373","Type":"ContainerDied","Data":"b328d9b54b4bb04befe9c7fb9488ed21d048a4b9e5592f701a8c415ab5bad0a2"} Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.675396 4865 scope.go:117] "RemoveContainer" containerID="b328d9b54b4bb04befe9c7fb9488ed21d048a4b9e5592f701a8c415ab5bad0a2" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.684819 4865 generic.go:334] "Generic (PLEG): container finished" podID="d2f4bfa4-63e2-418a-b52a-75d2992af596" containerID="32ae1b4369c9079abde65d5f4e4fa0adee9c6e4bc077842197a00acdec6f66a3" exitCode=1 Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.684870 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" event={"ID":"d2f4bfa4-63e2-418a-b52a-75d2992af596","Type":"ContainerDied","Data":"32ae1b4369c9079abde65d5f4e4fa0adee9c6e4bc077842197a00acdec6f66a3"} Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.685182 4865 scope.go:117] "RemoveContainer" containerID="32ae1b4369c9079abde65d5f4e4fa0adee9c6e4bc077842197a00acdec6f66a3" Jan 23 12:58:49 crc kubenswrapper[4865]: E0123 12:58:49.685382 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-c87fff755-mlm5v_openstack-operators(d2f4bfa4-63e2-418a-b52a-75d2992af596)\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.711742 4865 generic.go:334] "Generic (PLEG): container finished" podID="93194445-a021-4960-ab82-085f13cc959d" containerID="308dca9067f380a5a0d9f4213ded0cec44fafe37706380068a2a22ca270c04ba" exitCode=1 Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.711837 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" event={"ID":"93194445-a021-4960-ab82-085f13cc959d","Type":"ContainerDied","Data":"308dca9067f380a5a0d9f4213ded0cec44fafe37706380068a2a22ca270c04ba"} Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.712521 4865 scope.go:117] "RemoveContainer" containerID="308dca9067f380a5a0d9f4213ded0cec44fafe37706380068a2a22ca270c04ba" Jan 23 12:58:49 crc kubenswrapper[4865]: E0123 12:58:49.712954 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-55db956ddc-cbz92_openstack-operators(93194445-a021-4960-ab82-085f13cc959d)\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.734727 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hzwqc" event={"ID":"c011a295-505e-465c-a8d6-a647d7ad8ed2","Type":"ContainerStarted","Data":"45b285a95c3331fe7f8f1732de9e603a6db6bd2482bfc3fd89deadfc53b3e9d2"} Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.745707 4865 generic.go:334] "Generic (PLEG): container finished" podID="1959a742-ade2-4266-9a93-e96a1b6e3908" 
containerID="b29868d5a9978529bde12d6e5328ff0f7fb4c7425a6fae7ca2cf9640eba7d400" exitCode=1 Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.745962 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" event={"ID":"1959a742-ade2-4266-9a93-e96a1b6e3908","Type":"ContainerDied","Data":"b29868d5a9978529bde12d6e5328ff0f7fb4c7425a6fae7ca2cf9640eba7d400"} Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.746550 4865 scope.go:117] "RemoveContainer" containerID="b29868d5a9978529bde12d6e5328ff0f7fb4c7425a6fae7ca2cf9640eba7d400" Jan 23 12:58:49 crc kubenswrapper[4865]: E0123 12:58:49.746824 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.748456 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/1.log" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.749194 4865 generic.go:334] "Generic (PLEG): container finished" podID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerID="e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22" exitCode=1 Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.750155 4865 scope.go:117] "RemoveContainer" containerID="e1228c3b7d8949233ea788cf4a405373f71908b01d266909368fcf0063fd8746" Jan 23 12:58:49 crc kubenswrapper[4865]: E0123 12:58:49.750559 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-85cd9769bb-kkkcn_openstack-operators(dbfec6f5-80b4-480f-a958-c3107b2776c0)\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.750811 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" event={"ID":"2c1ba660-8691-49e2-b0cc-056355d82f4c","Type":"ContainerDied","Data":"e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22"} Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.751376 4865 scope.go:117] "RemoveContainer" containerID="e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.751947 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.751970 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.754586 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.780989 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:49 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:49 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:49 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.781440 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:49 crc kubenswrapper[4865]: I0123 12:58:49.806757 4865 scope.go:117] "RemoveContainer" containerID="970fbd29e7ad027b715d5162c082d4b785da78e0c7cbe380974c284c1f434308" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.084669 4865 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.084728 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.232049 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.232102 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.457792 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.458073 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.488249 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.488577 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.490919 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.490969 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.544671 4865 scope.go:117] "RemoveContainer" 
containerID="63205f38181e8e7e4b899f35881b81bd6c72eba848992c5ee8006e2f0700a70e" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.751499 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.751608 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.755407 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.755477 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.769283 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-42cdm_843c383b-053f-42f5-88ce-7a216f5354a3/catalog-operator/2.log" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.772063 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-42cdm_843c383b-053f-42f5-88ce-7a216f5354a3/catalog-operator/1.log" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.779015 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.779043 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.780984 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:50 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:50 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:50 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.781031 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:50 crc 
kubenswrapper[4865]: I0123 12:58:50.803193 4865 generic.go:334] "Generic (PLEG): container finished" podID="15434cef-8cb6-4386-b761-143f1819cac8" containerID="cb91c750d12981120827f7b542090517af3ea3a9ede28ff7cd23321b1eb4911e" exitCode=1 Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.803256 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" event={"ID":"15434cef-8cb6-4386-b761-143f1819cac8","Type":"ContainerDied","Data":"cb91c750d12981120827f7b542090517af3ea3a9ede28ff7cd23321b1eb4911e"} Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.803821 4865 scope.go:117] "RemoveContainer" containerID="cb91c750d12981120827f7b542090517af3ea3a9ede28ff7cd23321b1eb4911e" Jan 23 12:58:50 crc kubenswrapper[4865]: E0123 12:58:50.804026 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-cf98fcc89-7kqtt_cert-manager(15434cef-8cb6-4386-b761-143f1819cac8)\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" podUID="15434cef-8cb6-4386-b761-143f1819cac8" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.807781 4865 scope.go:117] "RemoveContainer" containerID="308dca9067f380a5a0d9f4213ded0cec44fafe37706380068a2a22ca270c04ba" Jan 23 12:58:50 crc kubenswrapper[4865]: E0123 12:58:50.807983 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-55db956ddc-cbz92_openstack-operators(93194445-a021-4960-ab82-085f13cc959d)\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.808609 4865 scope.go:117] "RemoveContainer" containerID="b29868d5a9978529bde12d6e5328ff0f7fb4c7425a6fae7ca2cf9640eba7d400" Jan 23 12:58:50 crc kubenswrapper[4865]: E0123 12:58:50.808878 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.809372 4865 scope.go:117] "RemoveContainer" containerID="32ae1b4369c9079abde65d5f4e4fa0adee9c6e4bc077842197a00acdec6f66a3" Jan 23 12:58:50 crc kubenswrapper[4865]: E0123 12:58:50.809545 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-c87fff755-mlm5v_openstack-operators(d2f4bfa4-63e2-418a-b52a-75d2992af596)\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 23 12:58:50 crc kubenswrapper[4865]: I0123 12:58:50.918737 4865 scope.go:117] "RemoveContainer" containerID="d16a099e596d85c91fd0fa1d94c0861d76e76c4983032b1c5e97e173ecc3c6c4" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.403720 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" 
podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.521034 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" probeResult="failure" output=< Jan 23 12:58:51 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:58:51 crc kubenswrapper[4865]: > Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.523089 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" probeResult="failure" output=< Jan 23 12:58:51 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:58:51 crc kubenswrapper[4865]: > Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.592632 4865 scope.go:117] "RemoveContainer" containerID="ce6ae7c2846a936cf92acff3471ab484efba821c99400e231c47bf24e176f43e" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.715972 4865 scope.go:117] "RemoveContainer" containerID="1fdbf97e657e0bfd89c2f730d3b5c9a07d8e976682f4a06188f4ac6b2e76428f" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.776833 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:51 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:51 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:51 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.776902 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.809574 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.809640 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.820379 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/1.log" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.820714 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" 
event={"ID":"189c80ac-7038-4b48-bebb-5c5d7e2cd362","Type":"ContainerStarted","Data":"b0ee05bb915d1c680af03f7d037e6a8f7dc71e86ef4db97ae98ae4de6c52867a"} Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.822038 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.822830 4865 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-7xpgm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.822869 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.827960 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7489ccbc46-6gcbp_a51b0d26-bdc8-433f-90e5-d90b9bd94373/oauth-openshift/1.log" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.828427 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" event={"ID":"a51b0d26-bdc8-433f-90e5-d90b9bd94373","Type":"ContainerStarted","Data":"1d19778092c058b0ad0247963b89ab6e7bd59aa27e7238ddba135add037d90ee"} Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.829538 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.829689 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" start-of-body= Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.829725 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.831249 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/2.log" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.831650 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/1.log" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.832226 4865 generic.go:334] "Generic (PLEG): container finished" podID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerID="06479c01739b297256807e4542768173edc9ad564760ed9cd9a0c5e8b7c8e232" exitCode=1 Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.832273 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" event={"ID":"2c1ba660-8691-49e2-b0cc-056355d82f4c","Type":"ContainerDied","Data":"06479c01739b297256807e4542768173edc9ad564760ed9cd9a0c5e8b7c8e232"} Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.832692 4865 scope.go:117] "RemoveContainer" containerID="06479c01739b297256807e4542768173edc9ad564760ed9cd9a0c5e8b7c8e232" Jan 23 12:58:51 crc kubenswrapper[4865]: E0123 12:58:51.832900 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=package-server-manager pod=package-server-manager-789f6589d5-4g249_openshift-operator-lifecycle-manager(2c1ba660-8691-49e2-b0cc-056355d82f4c)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.834876 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4cb0a89a-49f9-4a31-9cec-669e88882018","Type":"ContainerStarted","Data":"0078995fce3d8cc8a8d15ad4adb0633f594c03d387aab557dd4bd184e8947817"} Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.835338 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 12:58:51 crc kubenswrapper[4865]: I0123 12:58:51.893935 4865 scope.go:117] "RemoveContainer" containerID="e3fe3b1865710694ccdd89df2ca4de17a4db373f4f67811172ced80874644711" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.229439 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.545526 4865 scope.go:117] "RemoveContainer" containerID="476d6bdbc43b8e01fb3e9f46b5fac5875299d36c4c9a12328874015faac89f4f" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.554961 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.555019 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.555111 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded" start-of-body= Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.555206 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": context 
deadline exceeded" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.710501 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hh6cp" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.712532 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hh6cp" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.738291 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.738511 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.738568 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.738671 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.738723 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.739551 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"7fa635d8424d3f66c94287f9ba0ad214fc331b0c607cab47353c96a11d4e376e"} pod="openshift-console-operator/console-operator-58897d9998-8lsbn" containerMessage="Container console-operator failed liveness probe, will be restarted" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.739624 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" containerID="cri-o://7fa635d8424d3f66c94287f9ba0ad214fc331b0c607cab47353c96a11d4e376e" gracePeriod=30 Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.757564 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": read tcp 10.217.0.2:54356->10.217.0.10:8443: read: connection reset by peer" start-of-body= Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.757650 4865 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": read tcp 10.217.0.2:54356->10.217.0.10:8443: read: connection reset by peer" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.776370 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:52 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:52 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:52 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.776423 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.849881 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/2.log" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.850319 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/1.log" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.851416 4865 generic.go:334] "Generic (PLEG): container finished" podID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerID="b0ee05bb915d1c680af03f7d037e6a8f7dc71e86ef4db97ae98ae4de6c52867a" exitCode=1 Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.851478 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" event={"ID":"189c80ac-7038-4b48-bebb-5c5d7e2cd362","Type":"ContainerDied","Data":"b0ee05bb915d1c680af03f7d037e6a8f7dc71e86ef4db97ae98ae4de6c52867a"} Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.852123 4865 scope.go:117] "RemoveContainer" containerID="b0ee05bb915d1c680af03f7d037e6a8f7dc71e86ef4db97ae98ae4de6c52867a" Jan 23 12:58:52 crc kubenswrapper[4865]: E0123 12:58:52.852391 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-7xpgm_openshift-marketplace(189c80ac-7038-4b48-bebb-5c5d7e2cd362)\"" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.856060 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7489ccbc46-6gcbp_a51b0d26-bdc8-433f-90e5-d90b9bd94373/oauth-openshift/2.log" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.856514 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7489ccbc46-6gcbp_a51b0d26-bdc8-433f-90e5-d90b9bd94373/oauth-openshift/1.log" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.857841 4865 generic.go:334] "Generic (PLEG): container finished" podID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" 
containerID="1d19778092c058b0ad0247963b89ab6e7bd59aa27e7238ddba135add037d90ee" exitCode=255 Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.857914 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" event={"ID":"a51b0d26-bdc8-433f-90e5-d90b9bd94373","Type":"ContainerDied","Data":"1d19778092c058b0ad0247963b89ab6e7bd59aa27e7238ddba135add037d90ee"} Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.858488 4865 scope.go:117] "RemoveContainer" containerID="1d19778092c058b0ad0247963b89ab6e7bd59aa27e7238ddba135add037d90ee" Jan 23 12:58:52 crc kubenswrapper[4865]: E0123 12:58:52.858741 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-7489ccbc46-6gcbp_openshift-authentication(a51b0d26-bdc8-433f-90e5-d90b9bd94373)\"" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.862631 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.862664 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:58:52 crc kubenswrapper[4865]: I0123 12:58:52.957141 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.013297 4865 scope.go:117] "RemoveContainer" containerID="34eabd6c502550b118ebbab06e0e826b6e3ea3a716d028a059c8e0fdcc47a0d5" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.118403 4865 scope.go:117] "RemoveContainer" containerID="59753f8a9ac601813cf61722fa2f680aaa9360854df772d81a25c24ca3e9ccbd" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.118976 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:58:53 crc kubenswrapper[4865]: E0123 12:58:53.119153 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.130700 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.130772 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.311880 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.493310 4865 scope.go:117] "RemoveContainer" containerID="c51c964c06647f878163c7193cda0d69f17f715564a8f339956514f2b970af5a" Jan 23 
12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.760311 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" probeResult="failure" output=< Jan 23 12:58:53 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:58:53 crc kubenswrapper[4865]: > Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.775302 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:53 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:53 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:53 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.775528 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.776698 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.776939 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.792333 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.793263 4865 scope.go:117] "RemoveContainer" containerID="9d6ae669bfddc36d31dd962756be7302311916cffdc4eaaa27943b7ba6a5ee53" Jan 23 12:58:53 crc kubenswrapper[4865]: E0123 12:58:53.793513 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with CrashLoopBackOff: \"back-off 10s restarting failed container=frr-k8s-webhook-server pod=frr-k8s-webhook-server-7df86c4f6c-dkvk4_metallb-system(4116044f-0cc3-41fb-9f26-536213e1dfa3)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.838571 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.839220 4865 scope.go:117] "RemoveContainer" containerID="10cd8fab139cf2e40a506f73cefcdd6e86a95345cfc9fa18668937771bceec47" Jan 23 12:58:53 crc kubenswrapper[4865]: E0123 12:58:53.839439 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=catalog-operator 
pod=catalog-operator-68c6474976-42cdm_openshift-operator-lifecycle-manager(843c383b-053f-42f5-88ce-7a216f5354a3)\"" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.839486 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.875150 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-8lsbn_cfe7c397-99ae-494d-a418-b0f08568f156/console-operator/1.log" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.875763 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-8lsbn_cfe7c397-99ae-494d-a418-b0f08568f156/console-operator/0.log" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.875818 4865 generic.go:334] "Generic (PLEG): container finished" podID="cfe7c397-99ae-494d-a418-b0f08568f156" containerID="7fa635d8424d3f66c94287f9ba0ad214fc331b0c607cab47353c96a11d4e376e" exitCode=255 Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.875911 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" event={"ID":"cfe7c397-99ae-494d-a418-b0f08568f156","Type":"ContainerDied","Data":"7fa635d8424d3f66c94287f9ba0ad214fc331b0c607cab47353c96a11d4e376e"} Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.878332 4865 generic.go:334] "Generic (PLEG): container finished" podID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerID="0078995fce3d8cc8a8d15ad4adb0633f594c03d387aab557dd4bd184e8947817" exitCode=1 Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.878392 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4cb0a89a-49f9-4a31-9cec-669e88882018","Type":"ContainerDied","Data":"0078995fce3d8cc8a8d15ad4adb0633f594c03d387aab557dd4bd184e8947817"} Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.879100 4865 scope.go:117] "RemoveContainer" containerID="0078995fce3d8cc8a8d15ad4adb0633f594c03d387aab557dd4bd184e8947817" Jan 23 12:58:53 crc kubenswrapper[4865]: E0123 12:58:53.879385 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(4cb0a89a-49f9-4a31-9cec-669e88882018)\"" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.881366 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" event={"ID":"2c3366d9-565f-4601-acbb-b473dcfe126c","Type":"ContainerStarted","Data":"cae7350f56e93ca710d0c21c2da50413d1a8d37e184decf6367e6eecde1618f1"} Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.881793 4865 scope.go:117] "RemoveContainer" containerID="10cd8fab139cf2e40a506f73cefcdd6e86a95345cfc9fa18668937771bceec47" Jan 23 12:58:53 crc kubenswrapper[4865]: E0123 12:58:53.882029 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=catalog-operator 
pod=catalog-operator-68c6474976-42cdm_openshift-operator-lifecycle-manager(843c383b-053f-42f5-88ce-7a216f5354a3)\"" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.882109 4865 scope.go:117] "RemoveContainer" containerID="1d19778092c058b0ad0247963b89ab6e7bd59aa27e7238ddba135add037d90ee" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.882319 4865 scope.go:117] "RemoveContainer" containerID="b0ee05bb915d1c680af03f7d037e6a8f7dc71e86ef4db97ae98ae4de6c52867a" Jan 23 12:58:53 crc kubenswrapper[4865]: E0123 12:58:53.882332 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-7489ccbc46-6gcbp_openshift-authentication(a51b0d26-bdc8-433f-90e5-d90b9bd94373)\"" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" Jan 23 12:58:53 crc kubenswrapper[4865]: E0123 12:58:53.882592 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-7xpgm_openshift-marketplace(189c80ac-7038-4b48-bebb-5c5d7e2cd362)\"" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" Jan 23 12:58:53 crc kubenswrapper[4865]: I0123 12:58:53.913244 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-hzwqc" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.009463 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.150521 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.154254 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.166231 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.166279 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.167356 4865 scope.go:117] "RemoveContainer" containerID="06479c01739b297256807e4542768173edc9ad564760ed9cd9a0c5e8b7c8e232" Jan 23 12:58:54 crc kubenswrapper[4865]: E0123 12:58:54.167751 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=package-server-manager pod=package-server-manager-789f6589d5-4g249_openshift-operator-lifecycle-manager(2c1ba660-8691-49e2-b0cc-056355d82f4c)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.176052 4865 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" probeResult="failure" output=< Jan 23 12:58:54 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:58:54 crc kubenswrapper[4865]: > Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.357427 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.774639 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:54 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:54 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:54 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.775955 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.803797 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.803821 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.883852 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.883920 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.884284 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 
12:58:54.884307 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:54 crc kubenswrapper[4865]: I0123 12:58:54.892778 4865 scope.go:117] "RemoveContainer" containerID="1d19778092c058b0ad0247963b89ab6e7bd59aa27e7238ddba135add037d90ee" Jan 23 12:58:54 crc kubenswrapper[4865]: E0123 12:58:54.893008 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-7489ccbc46-6gcbp_openshift-authentication(a51b0d26-bdc8-433f-90e5-d90b9bd94373)\"" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" Jan 23 12:58:55 crc kubenswrapper[4865]: I0123 12:58:55.119830 4865 scope.go:117] "RemoveContainer" containerID="66b146c5353ddb9b635d97c53b489328a11e843c79525d2f1a00e177a906335e" Jan 23 12:58:55 crc kubenswrapper[4865]: I0123 12:58:55.512389 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:55 crc kubenswrapper[4865]: I0123 12:58:55.512443 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:55 crc kubenswrapper[4865]: I0123 12:58:55.551370 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:55 crc kubenswrapper[4865]: I0123 12:58:55.551431 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:55 crc kubenswrapper[4865]: I0123 12:58:55.551491 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:55 crc kubenswrapper[4865]: I0123 12:58:55.551507 4865 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:55 crc kubenswrapper[4865]: I0123 12:58:55.773918 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:55 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:55 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:55 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:55 crc kubenswrapper[4865]: I0123 12:58:55.773974 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:55 crc kubenswrapper[4865]: I0123 12:58:55.987402 4865 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.253119856s: [/var/lib/containers/storage/overlay/756bc51966a81465c5b9f363a7e8a18d63d9d44467d5337eb7791b55e0934b3a/diff /var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-d5wwt_f1ba215d-080e-4aed-acb5-0c01cb2abacc/nmstate-console-plugin/0.log]; will not log again for this container unless duration exceeds 2s Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.046571 4865 scope.go:117] "RemoveContainer" containerID="8b6c74d2e551ed18d0e83b546e1c83a5fe9bf0ff237daee33f85aa343ab45a7d" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.118711 4865 scope.go:117] "RemoveContainer" containerID="f6355aa5a5dace796906b30065a06ceac12ef8ccf3d9daab57b9c0896657f733" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.217745 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.218917 4865 scope.go:117] "RemoveContainer" containerID="f580a6a63b6cd64621bf00584c28b619db653ef30817009688d5a3033aaf33c6" Jan 23 12:58:56 crc kubenswrapper[4865]: E0123 12:58:56.219126 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=barbican-operator-controller-manager-59dd8b7cbf-nppmq_openstack-operators(5fb13a32-67c3-46b1-a0b8-573e941e6c7e)\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.347985 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.349411 4865 scope.go:117] "RemoveContainer" containerID="b8e261b2f93c481bba3b0f111f268ed851bd5a73ba1244cfab21e04a3b5bcad8" Jan 23 12:58:56 crc kubenswrapper[4865]: E0123 12:58:56.349645 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager 
pod=heat-operator-controller-manager-594c8c9d5d-fsch6_openstack-operators(0167f850-ba43-426a-8c56-aa171131e7da)\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.387306 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.618994 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.619992 4865 scope.go:117] "RemoveContainer" containerID="ec6b92229dbc3dc459ac92cd5bff829cdf79f412c7047ece466b803430a755e2" Jan 23 12:58:56 crc kubenswrapper[4865]: E0123 12:58:56.620284 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-h6dkp_openstack-operators(967c3782-1bce-4145-8244-7650fe19dc22)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.629111 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.629917 4865 scope.go:117] "RemoveContainer" containerID="0078995fce3d8cc8a8d15ad4adb0633f594c03d387aab557dd4bd184e8947817" Jan 23 12:58:56 crc kubenswrapper[4865]: E0123 12:58:56.630133 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(4cb0a89a-49f9-4a31-9cec-669e88882018)\"" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.700235 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.701242 4865 scope.go:117] "RemoveContainer" containerID="725cbb8bdc789381556fc95b10f61f4454ce204d5e88b36b62daaf100a191610" Jan 23 12:58:56 crc kubenswrapper[4865]: E0123 12:58:56.701547 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-b8b6d4659-9fl7w_openstack-operators(e92ddc14-bdb6-4407-b8a3-047079030166)\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.744587 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.745516 4865 scope.go:117] "RemoveContainer" containerID="32ae1b4369c9079abde65d5f4e4fa0adee9c6e4bc077842197a00acdec6f66a3" Jan 23 12:58:56 crc kubenswrapper[4865]: E0123 12:58:56.745848 4865 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-c87fff755-mlm5v_openstack-operators(d2f4bfa4-63e2-418a-b52a-75d2992af596)\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.774999 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:56 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:56 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:56 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.775052 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.845170 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.845947 4865 scope.go:117] "RemoveContainer" containerID="b29868d5a9978529bde12d6e5328ff0f7fb4c7425a6fae7ca2cf9640eba7d400" Jan 23 12:58:56 crc kubenswrapper[4865]: E0123 12:58:56.846245 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.921530 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:58:56 crc kubenswrapper[4865]: I0123 12:58:56.922534 4865 scope.go:117] "RemoveContainer" containerID="308dca9067f380a5a0d9f4213ded0cec44fafe37706380068a2a22ca270c04ba" Jan 23 12:58:56 crc kubenswrapper[4865]: E0123 12:58:56.922774 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-55db956ddc-cbz92_openstack-operators(93194445-a021-4960-ab82-085f13cc959d)\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.118794 4865 scope.go:117] "RemoveContainer" containerID="6e3b76caed0d76172727765da5704f1260f0f6ff0e355debf75064878c56078f" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.119770 4865 scope.go:117] "RemoveContainer" containerID="fd24bf374cb93bd1ac3be24ba239a5a2297119650e90d74686695ca9642f7f88" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.255553 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 
12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.255961 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-8lsbn_cfe7c397-99ae-494d-a418-b0f08568f156/console-operator/1.log" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.256760 4865 scope.go:117] "RemoveContainer" containerID="e1228c3b7d8949233ea788cf4a405373f71908b01d266909368fcf0063fd8746" Jan 23 12:58:57 crc kubenswrapper[4865]: E0123 12:58:57.257061 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=telemetry-operator-controller-manager-85cd9769bb-kkkcn_openstack-operators(dbfec6f5-80b4-480f-a958-c3107b2776c0)\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.257657 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-8lsbn_cfe7c397-99ae-494d-a418-b0f08568f156/console-operator/0.log" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.257756 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" event={"ID":"cfe7c397-99ae-494d-a418-b0f08568f156","Type":"ContainerStarted","Data":"b872b0e0bd043d0642329878eeb0e70a3ea40665e2b3ce5f0fe8633692775440"} Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.277198 4865 generic.go:334] "Generic (PLEG): container finished" podID="2c3366d9-565f-4601-acbb-b473dcfe126c" containerID="cae7350f56e93ca710d0c21c2da50413d1a8d37e184decf6367e6eecde1618f1" exitCode=1 Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.277261 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" event={"ID":"2c3366d9-565f-4601-acbb-b473dcfe126c","Type":"ContainerDied","Data":"cae7350f56e93ca710d0c21c2da50413d1a8d37e184decf6367e6eecde1618f1"} Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.277317 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f9669f7bd-ckgrk" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.278248 4865 scope.go:117] "RemoveContainer" containerID="cae7350f56e93ca710d0c21c2da50413d1a8d37e184decf6367e6eecde1618f1" Jan 23 12:58:57 crc kubenswrapper[4865]: E0123 12:58:57.278480 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-54ccf4f85d-l6w6d_openstack-operators(2c3366d9-565f-4601-acbb-b473dcfe126c)\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.299727 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.300004 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.300399 4865 scope.go:117] "RemoveContainer" 
containerID="c8e8f994678810f599e85ee5892ed48a42135387b679873c1a02e57882bacccd" Jan 23 12:58:57 crc kubenswrapper[4865]: E0123 12:58:57.300628 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-547cbdb99f-zm52l_openstack-operators(661fbfd2-7d52-419a-943f-c57854d2306b)\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.300789 4865 scope.go:117] "RemoveContainer" containerID="6db42aed1c07ce277ecd3b8215a67495dfaf2bf15960b45ae32504ccb5fd0d52" Jan 23 12:58:57 crc kubenswrapper[4865]: E0123 12:58:57.301097 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=watcher-operator-controller-manager-5ffb9c6597-7mv2d_openstack-operators(8ef0fdaa-8086-467d-8106-5c6dec532dba)\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.565988 4865 scope.go:117] "RemoveContainer" containerID="adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064" Jan 23 12:58:57 crc kubenswrapper[4865]: E0123 12:58:57.567706 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064\": container with ID starting with adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064 not found: ID does not exist" containerID="adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.567831 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064"} err="failed to get container status \"adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064\": rpc error: code = NotFound desc = could not find container \"adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064\": container with ID starting with adfc562f0e7157ba8699914ed08ba7fb37f3fcc6b91d71085090a0baba581064 not found: ID does not exist" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.567920 4865 scope.go:117] "RemoveContainer" containerID="eeb461b6cb630a97f0fbc5e12f059a8993d241deb81f696209187f4282c21944" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.671322 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.698577 4865 scope.go:117] "RemoveContainer" containerID="eab4cbda6d14d24be83659972215f2f801fab258cb4ad7de3a085c70e05d8d00" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.700570 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-ca-certs\") pod \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.700727 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6083e716-8bbf-40bf-abdd-87e865a2f7ae-test-operator-ephemeral-temporary\") pod \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.700781 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.700852 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-ssh-key\") pod \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.700921 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6083e716-8bbf-40bf-abdd-87e865a2f7ae-openstack-config\") pod \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.701001 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-openstack-config-secret\") pod \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.701034 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6083e716-8bbf-40bf-abdd-87e865a2f7ae-test-operator-ephemeral-workdir\") pod \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.701099 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nnnj\" (UniqueName: \"kubernetes.io/projected/6083e716-8bbf-40bf-abdd-87e865a2f7ae-kube-api-access-9nnnj\") pod \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.701161 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6083e716-8bbf-40bf-abdd-87e865a2f7ae-config-data\") pod \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\" (UID: \"6083e716-8bbf-40bf-abdd-87e865a2f7ae\") " Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 
12:58:57.702415 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6083e716-8bbf-40bf-abdd-87e865a2f7ae-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "6083e716-8bbf-40bf-abdd-87e865a2f7ae" (UID: "6083e716-8bbf-40bf-abdd-87e865a2f7ae"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.702661 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6083e716-8bbf-40bf-abdd-87e865a2f7ae-config-data" (OuterVolumeSpecName: "config-data") pod "6083e716-8bbf-40bf-abdd-87e865a2f7ae" (UID: "6083e716-8bbf-40bf-abdd-87e865a2f7ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.721426 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6083e716-8bbf-40bf-abdd-87e865a2f7ae-kube-api-access-9nnnj" (OuterVolumeSpecName: "kube-api-access-9nnnj") pod "6083e716-8bbf-40bf-abdd-87e865a2f7ae" (UID: "6083e716-8bbf-40bf-abdd-87e865a2f7ae"). InnerVolumeSpecName "kube-api-access-9nnnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.721734 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "6083e716-8bbf-40bf-abdd-87e865a2f7ae" (UID: "6083e716-8bbf-40bf-abdd-87e865a2f7ae"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.765821 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "6083e716-8bbf-40bf-abdd-87e865a2f7ae" (UID: "6083e716-8bbf-40bf-abdd-87e865a2f7ae"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.766525 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6083e716-8bbf-40bf-abdd-87e865a2f7ae" (UID: "6083e716-8bbf-40bf-abdd-87e865a2f7ae"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.780489 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "6083e716-8bbf-40bf-abdd-87e865a2f7ae" (UID: "6083e716-8bbf-40bf-abdd-87e865a2f7ae"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.782208 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:57 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:57 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:57 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.782252 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.804576 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.804651 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nnnj\" (UniqueName: \"kubernetes.io/projected/6083e716-8bbf-40bf-abdd-87e865a2f7ae-kube-api-access-9nnnj\") on node \"crc\" DevicePath \"\"" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.804665 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6083e716-8bbf-40bf-abdd-87e865a2f7ae-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.804677 4865 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.804689 4865 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/6083e716-8bbf-40bf-abdd-87e865a2f7ae-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.804736 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.804749 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6083e716-8bbf-40bf-abdd-87e865a2f7ae-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.808917 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6083e716-8bbf-40bf-abdd-87e865a2f7ae-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "6083e716-8bbf-40bf-abdd-87e865a2f7ae" (UID: "6083e716-8bbf-40bf-abdd-87e865a2f7ae"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.826554 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6083e716-8bbf-40bf-abdd-87e865a2f7ae-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "6083e716-8bbf-40bf-abdd-87e865a2f7ae" (UID: "6083e716-8bbf-40bf-abdd-87e865a2f7ae"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.846258 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.906376 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.906406 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6083e716-8bbf-40bf-abdd-87e865a2f7ae-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 23 12:58:57 crc kubenswrapper[4865]: I0123 12:58:57.906416 4865 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/6083e716-8bbf-40bf-abdd-87e865a2f7ae-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 23 12:58:58 crc kubenswrapper[4865]: E0123 12:58:58.046764 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9177b0d0_3ce7_40fe_8567_85cb8dd5227a.slice/crio-1bea2df943dd4b2320c01129fc3e1f605f1e54d306d39726538cd5cb68181c29.scope\": RecentStats: unable to find data in memory cache]" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.118396 4865 scope.go:117] "RemoveContainer" containerID="a5571cd178bb438261317ba38387e608ad50d4bb004aa2d11391f7a29dd99411" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.207255 4865 scope.go:117] "RemoveContainer" containerID="574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc" Jan 23 12:58:58 crc kubenswrapper[4865]: E0123 12:58:58.207639 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc\": container with ID starting with 574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc not found: ID does not exist" containerID="574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.207671 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc"} err="failed to get container status \"574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc\": rpc error: code = NotFound desc = could not find container \"574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc\": container with ID starting with 574fd798a3071f362cd7f50d7ac6a3214d8e9b099a0d6551a71bc7aceddb99dc not found: ID does not exist" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.207688 4865 scope.go:117] "RemoveContainer" 
containerID="f885d8e004cc28a105f22692ba41d19be021fcaf768af9b3403a43a9e72e86cd" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.346053 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.561184 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.561231 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.561474 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.561424 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.561567 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.562451 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"da789a528559c31f8bf0e20e446bbe2e404c5e09244ced0365c858057a65f55a"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.562495 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" containerID="cri-o://da789a528559c31f8bf0e20e446bbe2e404c5e09244ced0365c858057a65f55a" gracePeriod=30 Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.574059 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": read tcp 
10.217.0.2:34942->10.217.0.11:8443: read: connection reset by peer" start-of-body= Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.574119 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": read tcp 10.217.0.2:34942->10.217.0.11:8443: read: connection reset by peer" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.686945 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.687724 4865 scope.go:117] "RemoveContainer" containerID="b0ee05bb915d1c680af03f7d037e6a8f7dc71e86ef4db97ae98ae4de6c52867a" Jan 23 12:58:58 crc kubenswrapper[4865]: E0123 12:58:58.687993 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-7xpgm_openshift-marketplace(189c80ac-7038-4b48-bebb-5c5d7e2cd362)\"" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.781042 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:58 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:58 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:58 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.781105 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.822786 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-42cdm_843c383b-053f-42f5-88ce-7a216f5354a3/catalog-operator/2.log" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.827060 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"6083e716-8bbf-40bf-abdd-87e865a2f7ae","Type":"ContainerDied","Data":"887e44558db70fa1643ba6c8e9fc27aad4c10fc0af6af73d30fcc367208a65f7"} Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.827091 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="887e44558db70fa1643ba6c8e9fc27aad4c10fc0af6af73d30fcc367208a65f7" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.827151 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.831859 4865 generic.go:334] "Generic (PLEG): container finished" podID="10627175-8e39-4799-bec7-c0b49b938a29" containerID="60deb2e894053495e810e7cdb7878c53f79f0b4b14c436447990b3d38be4649d" exitCode=1 Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.831936 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" event={"ID":"10627175-8e39-4799-bec7-c0b49b938a29","Type":"ContainerDied","Data":"60deb2e894053495e810e7cdb7878c53f79f0b4b14c436447990b3d38be4649d"} Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.832327 4865 scope.go:117] "RemoveContainer" containerID="60deb2e894053495e810e7cdb7878c53f79f0b4b14c436447990b3d38be4649d" Jan 23 12:58:58 crc kubenswrapper[4865]: E0123 12:58:58.832646 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=designate-operator-controller-manager-b45d7bf98-4c94z_openstack-operators(10627175-8e39-4799-bec7-c0b49b938a29)\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.846153 4865 generic.go:334] "Generic (PLEG): container finished" podID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" containerID="c8e25097f0f83c69e0e2913b1e2a64e7f61dc0ac6daed1525a600f59e05e5e02" exitCode=1 Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.846245 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" event={"ID":"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b","Type":"ContainerDied","Data":"c8e25097f0f83c69e0e2913b1e2a64e7f61dc0ac6daed1525a600f59e05e5e02"} Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.864790 4865 generic.go:334] "Generic (PLEG): container finished" podID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerID="1bea2df943dd4b2320c01129fc3e1f605f1e54d306d39726538cd5cb68181c29" exitCode=1 Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.864881 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" event={"ID":"9177b0d0-3ce7-40fe-8567-85cb8dd5227a","Type":"ContainerDied","Data":"1bea2df943dd4b2320c01129fc3e1f605f1e54d306d39726538cd5cb68181c29"} Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.876418 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/2.log" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.884549 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/1.log" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.886285 4865 generic.go:334] "Generic (PLEG): container finished" podID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerID="53713b2e2ad169d7c03a338fa5a445d6705e8dd1da1084b2d62b6ecffc0a9f6b" exitCode=1 Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.886326 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" 
event={"ID":"b2ea2452-dc3b-4b93-a9d4-e562a63111c9","Type":"ContainerDied","Data":"53713b2e2ad169d7c03a338fa5a445d6705e8dd1da1084b2d62b6ecffc0a9f6b"} Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.914590 4865 scope.go:117] "RemoveContainer" containerID="53713b2e2ad169d7c03a338fa5a445d6705e8dd1da1084b2d62b6ecffc0a9f6b" Jan 23 12:58:58 crc kubenswrapper[4865]: E0123 12:58:58.914910 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-76c5c47f8f-p49qh_openstack-operators(b2ea2452-dc3b-4b93-a9d4-e562a63111c9)\"" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.916860 4865 scope.go:117] "RemoveContainer" containerID="cae7350f56e93ca710d0c21c2da50413d1a8d37e184decf6367e6eecde1618f1" Jan 23 12:58:58 crc kubenswrapper[4865]: E0123 12:58:58.917076 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-54ccf4f85d-l6w6d_openstack-operators(2c3366d9-565f-4601-acbb-b473dcfe126c)\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.917111 4865 scope.go:117] "RemoveContainer" containerID="1bea2df943dd4b2320c01129fc3e1f605f1e54d306d39726538cd5cb68181c29" Jan 23 12:58:58 crc kubenswrapper[4865]: E0123 12:58:58.917266 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=webhook-server pod=metallb-operator-webhook-server-78f5776895-s7hqg_metallb-system(9177b0d0-3ce7-40fe-8567-85cb8dd5227a)\"" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" Jan 23 12:58:58 crc kubenswrapper[4865]: I0123 12:58:58.921279 4865 scope.go:117] "RemoveContainer" containerID="c8e25097f0f83c69e0e2913b1e2a64e7f61dc0ac6daed1525a600f59e05e5e02" Jan 23 12:58:58 crc kubenswrapper[4865]: E0123 12:58:58.922522 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-7df9698d5d-lk94b_metallb-system(d1a0503d-3fc4-45b6-87c0-7af4a7246a4b)\"" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.013553 4865 scope.go:117] "RemoveContainer" containerID="954df5dd39f6d5ed839258a04ecc82954ab9e41d05f0cb9bba184f8fd069c651" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.114514 4865 scope.go:117] "RemoveContainer" containerID="5ab50c49bb504542d7db9345701205b0b89a3c8f45e8e144d3514ccb73b674a6" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.127487 4865 scope.go:117] "RemoveContainer" containerID="f3aedc3e84f03b5a8e35205c0b6b4acbbbb14f3224c2a1e020ffd763c7603f98" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.128155 4865 scope.go:117] "RemoveContainer" containerID="1f82bbc6562ef119de8d44283cf07658ee939e78d2e833cee725ec522543517b" Jan 23 12:58:59 crc kubenswrapper[4865]: 
I0123 12:58:59.162426 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.162515 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.664506 4865 scope.go:117] "RemoveContainer" containerID="c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052" Jan 23 12:58:59 crc kubenswrapper[4865]: E0123 12:58:59.664945 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052\": container with ID starting with c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052 not found: ID does not exist" containerID="c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.664977 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052"} err="failed to get container status \"c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052\": rpc error: code = NotFound desc = could not find container \"c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052\": container with ID starting with c92bfb8e6d43c3d4fbda8139c93f96215bfe4dc0c5abb0a744afe2107315a052 not found: ID does not exist" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.664999 4865 scope.go:117] "RemoveContainer" containerID="c6bcaf8ad19683b140c9c0ef03792fd1daa4e728dc17ee7c8d7fadfa8d25607c" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.777119 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:58:59 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:58:59 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:58:59 crc kubenswrapper[4865]: healthz check failed Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.780227 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.901064 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" event={"ID":"6aca96af-acfa-4c68-a2f4-ed19f08ddc4e","Type":"ContainerStarted","Data":"cdfd3ac5584103b22c2a79a31ed0095107b482a2f21d9168033a89df7eff77ee"} Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.901846 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.905320 4865 generic.go:334] "Generic (PLEG): container finished" podID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerID="548856cde1055956756c947adf57d4b5401f31359b6a9014ce3c9d05d88051cf" exitCode=1 Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.905492 4865 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8bjkz" event={"ID":"3685d2b2-151b-479a-92c1-ae400eacd1b9","Type":"ContainerDied","Data":"548856cde1055956756c947adf57d4b5401f31359b6a9014ce3c9d05d88051cf"} Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.907015 4865 scope.go:117] "RemoveContainer" containerID="548856cde1055956756c947adf57d4b5401f31359b6a9014ce3c9d05d88051cf" Jan 23 12:58:59 crc kubenswrapper[4865]: E0123 12:58:59.907557 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=controller pod=controller-6968d8fdc4-8bjkz_metallb-system(3685d2b2-151b-479a-92c1-ae400eacd1b9)\"" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.909431 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" event={"ID":"da1cf187-8918-46b4-ab33-e8912c9d0dd6","Type":"ContainerStarted","Data":"0c6031c52e32ce61747d89289bc49fa5f3b122eca73c4bd9f57be015ec527eb9"} Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.910241 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.913329 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/2.log" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.914001 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/1.log" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.915649 4865 generic.go:334] "Generic (PLEG): container finished" podID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerID="da789a528559c31f8bf0e20e446bbe2e404c5e09244ced0365c858057a65f55a" exitCode=255 Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.916226 4865 scope.go:117] "RemoveContainer" containerID="53713b2e2ad169d7c03a338fa5a445d6705e8dd1da1084b2d62b6ecffc0a9f6b" Jan 23 12:58:59 crc kubenswrapper[4865]: E0123 12:58:59.916460 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-76c5c47f8f-p49qh_openstack-operators(b2ea2452-dc3b-4b93-a9d4-e562a63111c9)\"" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.916514 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" event={"ID":"141f6171-3d39-421b-98f4-6accc5d30ae2","Type":"ContainerDied","Data":"da789a528559c31f8bf0e20e446bbe2e404c5e09244ced0365c858057a65f55a"} Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.916831 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Jan 23 12:58:59 crc kubenswrapper[4865]: I0123 12:58:59.916872 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.118705 4865 scope.go:117] "RemoveContainer" containerID="8def9c24761c33c45159a6ec2ce99f5dd723a4647323c64aed86dad731e312d5" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.120062 4865 scope.go:117] "RemoveContainer" containerID="82a5dc53e670de19adf070d15e3558500b1a04ef07b5381860ecdd360fb8e0fd" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.126881 4865 scope.go:117] "RemoveContainer" containerID="9b5e2623653f8096ac3aff4b822b14daf66396447038c4f8cf5cc198e1064fbb" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.488418 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.488801 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.550841 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.550915 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.700950 4865 scope.go:117] "RemoveContainer" containerID="54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389" Jan 23 12:59:00 crc kubenswrapper[4865]: E0123 12:59:00.701343 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389\": container with ID starting with 54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389 not found: ID does not exist" containerID="54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.701369 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389"} err="failed to get container status \"54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389\": rpc error: code = NotFound desc = could not find container 
\"54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389\": container with ID starting with 54ca7afb09fe0ec07df2ad856450ad61b0ca5961dc442c92fdbadbb79bc32389 not found: ID does not exist" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.701391 4865 scope.go:117] "RemoveContainer" containerID="295f055b70b23f536bbb0d34672057382274d320b9f85b221c28b54f85445626" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.775498 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:00 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:00 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:00 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.775560 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.865208 4865 scope.go:117] "RemoveContainer" containerID="232be94353aac2e87626af7b68144c0253405f7ad62d2f7221de27a4f2375137" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.926093 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/2.log" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.926859 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/1.log" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.927306 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" event={"ID":"141f6171-3d39-421b-98f4-6accc5d30ae2","Type":"ContainerStarted","Data":"1c7f453a8aa0ae056bac8b7a278a7f7156ab8b37aff06ebff06217df84e970bb"} Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.927417 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.929901 4865 generic.go:334] "Generic (PLEG): container finished" podID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" containerID="0c6031c52e32ce61747d89289bc49fa5f3b122eca73c4bd9f57be015ec527eb9" exitCode=1 Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.929965 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" event={"ID":"da1cf187-8918-46b4-ab33-e8912c9d0dd6","Type":"ContainerDied","Data":"0c6031c52e32ce61747d89289bc49fa5f3b122eca73c4bd9f57be015ec527eb9"} Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.930626 4865 scope.go:117] "RemoveContainer" containerID="0c6031c52e32ce61747d89289bc49fa5f3b122eca73c4bd9f57be015ec527eb9" Jan 23 12:59:00 crc kubenswrapper[4865]: E0123 12:59:00.930897 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager 
pod=glance-operator-controller-manager-78fdd796fd-8qtnc_openstack-operators(da1cf187-8918-46b4-ab33-e8912c9d0dd6)\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.931912 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" event={"ID":"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73","Type":"ContainerStarted","Data":"e8bba2b0d16880a680c2f1646d719f787c9086585a89aecce074435b389bda88"} Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.932385 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.934783 4865 generic.go:334] "Generic (PLEG): container finished" podID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" containerID="cdfd3ac5584103b22c2a79a31ed0095107b482a2f21d9168033a89df7eff77ee" exitCode=1 Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.934846 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" event={"ID":"6aca96af-acfa-4c68-a2f4-ed19f08ddc4e","Type":"ContainerDied","Data":"cdfd3ac5584103b22c2a79a31ed0095107b482a2f21d9168033a89df7eff77ee"} Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.935517 4865 scope.go:117] "RemoveContainer" containerID="cdfd3ac5584103b22c2a79a31ed0095107b482a2f21d9168033a89df7eff77ee" Jan 23 12:59:00 crc kubenswrapper[4865]: E0123 12:59:00.935764 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-77d5c5b54f-qftlt_openstack-operators(6aca96af-acfa-4c68-a2f4-ed19f08ddc4e)\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.937987 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" event={"ID":"6d4fbfc8-900e-4c44-a458-039d37a6dd40","Type":"ContainerStarted","Data":"6cf78a2b1d1d694c28eefbe3bd33e8e63f79645642989bd83eeeaa0c3233b15d"} Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.938540 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.940581 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" event={"ID":"bdf8f14b-af0d-43cc-b624-7dab2879dc4b","Type":"ContainerStarted","Data":"d73a1b61243fadfa9f65fa4fdf2278989e76adf6f98cb19d257fa9e32b8d1db3"} Jan 23 12:59:00 crc kubenswrapper[4865]: I0123 12:59:00.941039 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:59:01 crc kubenswrapper[4865]: I0123 12:59:01.029808 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-d55dfcdfc-xwjxp_2699af1d-57a0-4ce2-9550-b423f9eafc0f/packageserver/1.log" Jan 23 12:59:01 crc kubenswrapper[4865]: I0123 12:59:01.030802 4865 scope.go:117] "RemoveContainer" 
containerID="53713b2e2ad169d7c03a338fa5a445d6705e8dd1da1084b2d62b6ecffc0a9f6b" Jan 23 12:59:01 crc kubenswrapper[4865]: E0123 12:59:01.031033 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-76c5c47f8f-p49qh_openstack-operators(b2ea2452-dc3b-4b93-a9d4-e562a63111c9)\"" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" Jan 23 12:59:01 crc kubenswrapper[4865]: I0123 12:59:01.118421 4865 scope.go:117] "RemoveContainer" containerID="d1dcaba699a08e73d448a396063bd12ecc6334242e3ffa33fd02a518ec5c09fe" Jan 23 12:59:01 crc kubenswrapper[4865]: I0123 12:59:01.359494 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" probeResult="failure" output=< Jan 23 12:59:01 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:59:01 crc kubenswrapper[4865]: > Jan 23 12:59:01 crc kubenswrapper[4865]: I0123 12:59:01.382942 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:01 crc kubenswrapper[4865]: I0123 12:59:01.524164 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-nhd4g" podUID="c9ae9da8-9e6d-44ba-82c9-9842698cfa4f" containerName="registry-server" probeResult="failure" output=< Jan 23 12:59:01 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:59:01 crc kubenswrapper[4865]: > Jan 23 12:59:01 crc kubenswrapper[4865]: I0123 12:59:01.737357 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 12:59:01 crc kubenswrapper[4865]: I0123 12:59:01.775629 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:01 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:01 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:01 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:01 crc kubenswrapper[4865]: I0123 12:59:01.775689 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.009591 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.009654 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.010421 4865 scope.go:117] "RemoveContainer" containerID="cae7350f56e93ca710d0c21c2da50413d1a8d37e184decf6367e6eecde1618f1" Jan 23 
12:59:02 crc kubenswrapper[4865]: E0123 12:59:02.010715 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-54ccf4f85d-l6w6d_openstack-operators(2c3366d9-565f-4601-acbb-b473dcfe126c)\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.040197 4865 scope.go:117] "RemoveContainer" containerID="0c6031c52e32ce61747d89289bc49fa5f3b122eca73c4bd9f57be015ec527eb9" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.040445 4865 scope.go:117] "RemoveContainer" containerID="cdfd3ac5584103b22c2a79a31ed0095107b482a2f21d9168033a89df7eff77ee" Jan 23 12:59:02 crc kubenswrapper[4865]: E0123 12:59:02.040495 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-78fdd796fd-8qtnc_openstack-operators(da1cf187-8918-46b4-ab33-e8912c9d0dd6)\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" Jan 23 12:59:02 crc kubenswrapper[4865]: E0123 12:59:02.040771 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-77d5c5b54f-qftlt_openstack-operators(6aca96af-acfa-4c68-a2f4-ed19f08ddc4e)\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.117862 4865 scope.go:117] "RemoveContainer" containerID="cb91c750d12981120827f7b542090517af3ea3a9ede28ff7cd23321b1eb4911e" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.118519 4865 scope.go:117] "RemoveContainer" containerID="225e8ea119b89ec53412b288a78504658e10536d73422bdffe3ca05d7a7e6596" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.144582 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.145706 4865 scope.go:117] "RemoveContainer" containerID="c8e25097f0f83c69e0e2913b1e2a64e7f61dc0ac6daed1525a600f59e05e5e02" Jan 23 12:59:02 crc kubenswrapper[4865]: E0123 12:59:02.146161 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-7df9698d5d-lk94b_metallb-system(d1a0503d-3fc4-45b6-87c0-7af4a7246a4b)\"" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.239858 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-sgsqx" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.307456 4865 scope.go:117] "RemoveContainer" containerID="6c85179785689b31c63f8780ade170488f7194ec897911ab511ac9d07ded86b1" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.654358 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.654635 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.655085 4865 scope.go:117] "RemoveContainer" containerID="1bea2df943dd4b2320c01129fc3e1f605f1e54d306d39726538cd5cb68181c29" Jan 23 12:59:02 crc kubenswrapper[4865]: E0123 12:59:02.655314 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=webhook-server pod=metallb-operator-webhook-server-78f5776895-s7hqg_metallb-system(9177b0d0-3ce7-40fe-8567-85cb8dd5227a)\"" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.737819 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.737870 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.737896 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.737945 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.774413 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:02 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:02 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:02 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:02 crc kubenswrapper[4865]: I0123 12:59:02.774473 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.118391 4865 scope.go:117] 
"RemoveContainer" containerID="a731dd1f940b77d2d471bc77ff3834d9ed1c0aaa7dc63e059f42afa2cba767ec" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.118651 4865 scope.go:117] "RemoveContainer" containerID="19590472562d768b58a36819b0839df5422b20ecc8e2438bd400797b00c548e4" Jan 23 12:59:03 crc kubenswrapper[4865]: E0123 12:59:03.118938 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-fdkt9_openstack-operators(8e227974-40b8-4d16-8d5f-961b705a9740)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.276254 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.339703 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.340038 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.340819 4865 scope.go:117] "RemoveContainer" containerID="548856cde1055956756c947adf57d4b5401f31359b6a9014ce3c9d05d88051cf" Jan 23 12:59:03 crc kubenswrapper[4865]: E0123 12:59:03.341046 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=controller pod=controller-6968d8fdc4-8bjkz_metallb-system(3685d2b2-151b-479a-92c1-ae400eacd1b9)\"" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.738466 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.738538 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.775658 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:03 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:03 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:03 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.775719 4865 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.777534 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.777582 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.777533 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.779308 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.779516 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"1f97d13aa3ce86a1d2a02f51ffbd89b438cecc4f57a86a864f771252de8c9b3f"} pod="metallb-system/frr-k8s-gh89m" containerMessage="Container controller failed liveness probe, will be restarted" Jan 23 12:59:03 crc kubenswrapper[4865]: I0123 12:59:03.779696 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" containerID="cri-o://1f97d13aa3ce86a1d2a02f51ffbd89b438cecc4f57a86a864f771252de8c9b3f" gracePeriod=2 Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.334242 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qwxxg" podUID="2bcb4671-0b01-435d-aa4b-b9596654bfff" containerName="registry-server" probeResult="failure" output=< Jan 23 12:59:04 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:59:04 crc kubenswrapper[4865]: > Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.336775 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hh6cp" podUID="14894ab1-ecfc-4a37-a4f3-bc526eb55ce2" containerName="registry-server" probeResult="failure" output=< Jan 23 12:59:04 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:59:04 crc kubenswrapper[4865]: > Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.355637 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.512150 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.512204 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.512481 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body= Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.512589 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.551551 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.551624 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.551626 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.551668 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.774468 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:04 crc 
kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:04 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:04 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.774518 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.803640 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.803755 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.803800 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-szb9h" Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.804581 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"07df3a1af2b9bce0fd1cfcba17c0038c1597e26cafddd2b98e53aadb8fdae6e7"} pod="metallb-system/speaker-szb9h" containerMessage="Container speaker failed liveness probe, will be restarted" Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.804652 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" containerID="cri-o://07df3a1af2b9bce0fd1cfcba17c0038c1597e26cafddd2b98e53aadb8fdae6e7" gracePeriod=2 Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.804748 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.884500 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.884816 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.884615 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get 
\"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:04 crc kubenswrapper[4865]: I0123 12:59:04.885009 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.078283 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerStarted","Data":"59d3de8a13e8835c1ed76b4149d3cd990bce2009bed5df37c5208d95bb6ad7ef"} Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.080682 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/2.log" Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.080753 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" event={"ID":"582f83b4-97dc-4f56-9879-c73fab80488a","Type":"ContainerStarted","Data":"77a357fb6650a9dafbfed58aa1235e55324194e7ac94b81225e205d43eae5a0b"} Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.082427 4865 generic.go:334] "Generic (PLEG): container finished" podID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" containerID="e8bba2b0d16880a680c2f1646d719f787c9086585a89aecce074435b389bda88" exitCode=1 Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.082492 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" event={"ID":"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73","Type":"ContainerDied","Data":"e8bba2b0d16880a680c2f1646d719f787c9086585a89aecce074435b389bda88"} Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.083376 4865 scope.go:117] "RemoveContainer" containerID="e8bba2b0d16880a680c2f1646d719f787c9086585a89aecce074435b389bda88" Jan 23 12:59:05 crc kubenswrapper[4865]: E0123 12:59:05.083751 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5d646b7d76-7fdbl_openstack-operators(fb9fb53a-b18e-4291-ab1b-83ac2fd78a73)\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.086486 4865 generic.go:334] "Generic (PLEG): container finished" podID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" containerID="d73a1b61243fadfa9f65fa4fdf2278989e76adf6f98cb19d257fa9e32b8d1db3" exitCode=1 Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.086560 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" event={"ID":"bdf8f14b-af0d-43cc-b624-7dab2879dc4b","Type":"ContainerDied","Data":"d73a1b61243fadfa9f65fa4fdf2278989e76adf6f98cb19d257fa9e32b8d1db3"} Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.087064 4865 scope.go:117] "RemoveContainer" 
containerID="d73a1b61243fadfa9f65fa4fdf2278989e76adf6f98cb19d257fa9e32b8d1db3" Jan 23 12:59:05 crc kubenswrapper[4865]: E0123 12:59:05.087335 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-69cf5d4557-9jp5b_openstack-operators(bdf8f14b-af0d-43cc-b624-7dab2879dc4b)\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.089496 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6497cbfbf6-fkmfr_60877fc9-78f8-4298-8104-8cd90e28d3bd/route-controller-manager/1.log" Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.090546 4865 generic.go:334] "Generic (PLEG): container finished" podID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerID="531e111fafa26b694fc58ed92230d273fd823d8287a4b5fa1ee16877358fe461" exitCode=255 Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.090638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" event={"ID":"60877fc9-78f8-4298-8104-8cd90e28d3bd","Type":"ContainerDied","Data":"531e111fafa26b694fc58ed92230d273fd823d8287a4b5fa1ee16877358fe461"} Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.091076 4865 scope.go:117] "RemoveContainer" containerID="531e111fafa26b694fc58ed92230d273fd823d8287a4b5fa1ee16877358fe461" Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.095666 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" event={"ID":"15434cef-8cb6-4386-b761-143f1819cac8","Type":"ContainerStarted","Data":"ad7b58da97aea8e81a4fc78a113fc65a47b6ef7c45356cb1e725a3ba71c07b61"} Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.098254 4865 generic.go:334] "Generic (PLEG): container finished" podID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" containerID="6cf78a2b1d1d694c28eefbe3bd33e8e63f79645642989bd83eeeaa0c3233b15d" exitCode=1 Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.098322 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" event={"ID":"6d4fbfc8-900e-4c44-a458-039d37a6dd40","Type":"ContainerDied","Data":"6cf78a2b1d1d694c28eefbe3bd33e8e63f79645642989bd83eeeaa0c3233b15d"} Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.098934 4865 scope.go:117] "RemoveContainer" containerID="6cf78a2b1d1d694c28eefbe3bd33e8e63f79645642989bd83eeeaa0c3233b15d" Jan 23 12:59:05 crc kubenswrapper[4865]: E0123 12:59:05.099163 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-7bd9774b6-bqtq9_openstack-operators(6d4fbfc8-900e-4c44-a458-039d37a6dd40)\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.105325 4865 generic.go:334] "Generic (PLEG): container finished" podID="9faffae5-73bb-4980-8092-b79a6888476d" containerID="1f97d13aa3ce86a1d2a02f51ffbd89b438cecc4f57a86a864f771252de8c9b3f" exitCode=1 Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.105371 4865 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerDied","Data":"1f97d13aa3ce86a1d2a02f51ffbd89b438cecc4f57a86a864f771252de8c9b3f"} Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.107522 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" event={"ID":"a9bb243e-e7c3-4f68-be35-d86fa049c570","Type":"ContainerStarted","Data":"fa0a27d3b1895affcd782683e375b5e4c30ad4f54a335fd33201c9ac60a1485b"} Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.111357 4865 generic.go:334] "Generic (PLEG): container finished" podID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerID="07df3a1af2b9bce0fd1cfcba17c0038c1597e26cafddd2b98e53aadb8fdae6e7" exitCode=1 Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.111394 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-szb9h" event={"ID":"3dee20a9-c14d-4a42-afb1-87d126996c56","Type":"ContainerDied","Data":"07df3a1af2b9bce0fd1cfcba17c0038c1597e26cafddd2b98e53aadb8fdae6e7"} Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.117637 4865 scope.go:117] "RemoveContainer" containerID="10cd8fab139cf2e40a506f73cefcdd6e86a95345cfc9fa18668937771bceec47" Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.118202 4865 scope.go:117] "RemoveContainer" containerID="9d6ae669bfddc36d31dd962756be7302311916cffdc4eaaa27943b7ba6a5ee53" Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.118470 4865 scope.go:117] "RemoveContainer" containerID="f6d9b4b3d5c12dd18a1e548634a2f2a1a036af095d890d878feed5bd34197f18" Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.197973 4865 scope.go:117] "RemoveContainer" containerID="d9147b8bace7855a843e97a1bac103beaa6d491e6eb97174767cc7a9b715c786" Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.776005 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:05 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:05 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:05 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:05 crc kubenswrapper[4865]: I0123 12:59:05.776413 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.151878 4865 scope.go:117] "RemoveContainer" containerID="bd719900c8142a7d20c2f2d0218496dbcd37cde9dab823d7260847f6749c0bcb" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.205753 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/3.log" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.206776 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/2.log" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.206831 4865 generic.go:334] "Generic (PLEG): container finished" podID="582f83b4-97dc-4f56-9879-c73fab80488a" 
containerID="77a357fb6650a9dafbfed58aa1235e55324194e7ac94b81225e205d43eae5a0b" exitCode=1 Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.222146 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-42cdm_843c383b-053f-42f5-88ce-7a216f5354a3/catalog-operator/2.log" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.249807 4865 generic.go:334] "Generic (PLEG): container finished" podID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" containerID="59d3de8a13e8835c1ed76b4149d3cd990bce2009bed5df37c5208d95bb6ad7ef" exitCode=255 Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.280365 4865 scope.go:117] "RemoveContainer" containerID="06479c01739b297256807e4542768173edc9ad564760ed9cd9a0c5e8b7c8e232" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.284152 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" event={"ID":"582f83b4-97dc-4f56-9879-c73fab80488a","Type":"ContainerDied","Data":"77a357fb6650a9dafbfed58aa1235e55324194e7ac94b81225e205d43eae5a0b"} Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.284183 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" event={"ID":"843c383b-053f-42f5-88ce-7a216f5354a3","Type":"ContainerStarted","Data":"372633d6c1629ea3c133a035a42459ad9888e2416153cf6dc5a143ce8246eb5e"} Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.284199 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerDied","Data":"59d3de8a13e8835c1ed76b4149d3cd990bce2009bed5df37c5208d95bb6ad7ef"} Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.284213 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.284223 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" event={"ID":"429b62c2-b748-40b1-b00f-a1b0488fc5d0","Type":"ContainerStarted","Data":"7b463c323817605cac9ed7177dc178ce724f86a06760f71e1fb716d423771420"} Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.284232 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.284242 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.284251 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.284753 4865 scope.go:117] "RemoveContainer" containerID="60deb2e894053495e810e7cdb7878c53f79f0b4b14c436447990b3d38be4649d" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.284936 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=designate-operator-controller-manager-b45d7bf98-4c94z_openstack-operators(10627175-8e39-4799-bec7-c0b49b938a29)\"" 
pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.285196 4865 scope.go:117] "RemoveContainer" containerID="f580a6a63b6cd64621bf00584c28b619db653ef30817009688d5a3033aaf33c6" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.285581 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=barbican-operator-controller-manager-59dd8b7cbf-nppmq_openstack-operators(5fb13a32-67c3-46b1-a0b8-573e941e6c7e)\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.286501 4865 scope.go:117] "RemoveContainer" containerID="77a357fb6650a9dafbfed58aa1235e55324194e7ac94b81225e205d43eae5a0b" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.286867 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=olm-operator pod=olm-operator-6b444d44fb-g5xkl_openshift-operator-lifecycle-manager(582f83b4-97dc-4f56-9879-c73fab80488a)\"" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.288354 4865 scope.go:117] "RemoveContainer" containerID="59d3de8a13e8835c1ed76b4149d3cd990bce2009bed5df37c5208d95bb6ad7ef" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.288756 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=csi-provisioner pod=csi-hostpathplugin-g7l9x_hostpath-provisioner(f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb)\"" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.293032 4865 generic.go:334] "Generic (PLEG): container finished" podID="a9bb243e-e7c3-4f68-be35-d86fa049c570" containerID="fa0a27d3b1895affcd782683e375b5e4c30ad4f54a335fd33201c9ac60a1485b" exitCode=1 Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.293166 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" event={"ID":"a9bb243e-e7c3-4f68-be35-d86fa049c570","Type":"ContainerDied","Data":"fa0a27d3b1895affcd782683e375b5e4c30ad4f54a335fd33201c9ac60a1485b"} Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.293509 4865 scope.go:117] "RemoveContainer" containerID="fa0a27d3b1895affcd782683e375b5e4c30ad4f54a335fd33201c9ac60a1485b" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.293758 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570)\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.298756 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6497cbfbf6-fkmfr_60877fc9-78f8-4298-8104-8cd90e28d3bd/route-controller-manager/1.log" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.302548 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" event={"ID":"60877fc9-78f8-4298-8104-8cd90e28d3bd","Type":"ContainerStarted","Data":"9a4139dfc969c2097fac96a7889d206938f804f3ca29b28428ed6f6ee614103d"} Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.303430 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.316874 4865 generic.go:334] "Generic (PLEG): container finished" podID="15434cef-8cb6-4386-b761-143f1819cac8" containerID="ad7b58da97aea8e81a4fc78a113fc65a47b6ef7c45356cb1e725a3ba71c07b61" exitCode=1 Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.318061 4865 scope.go:117] "RemoveContainer" containerID="ad7b58da97aea8e81a4fc78a113fc65a47b6ef7c45356cb1e725a3ba71c07b61" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.318307 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-cf98fcc89-7kqtt_cert-manager(15434cef-8cb6-4386-b761-143f1819cac8)\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" podUID="15434cef-8cb6-4386-b761-143f1819cac8" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.318351 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" event={"ID":"15434cef-8cb6-4386-b761-143f1819cac8","Type":"ContainerDied","Data":"ad7b58da97aea8e81a4fc78a113fc65a47b6ef7c45356cb1e725a3ba71c07b61"} Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.323452 4865 scope.go:117] "RemoveContainer" containerID="d73a1b61243fadfa9f65fa4fdf2278989e76adf6f98cb19d257fa9e32b8d1db3" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.325150 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-69cf5d4557-9jp5b_openstack-operators(bdf8f14b-af0d-43cc-b624-7dab2879dc4b)\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.338530 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.340390 4865 scope.go:117] "RemoveContainer" containerID="0c6031c52e32ce61747d89289bc49fa5f3b122eca73c4bd9f57be015ec527eb9" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.340657 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-78fdd796fd-8qtnc_openstack-operators(da1cf187-8918-46b4-ab33-e8912c9d0dd6)\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" Jan 23 12:59:06 crc 
kubenswrapper[4865]: I0123 12:59:06.350687 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.351373 4865 scope.go:117] "RemoveContainer" containerID="b8e261b2f93c481bba3b0f111f268ed851bd5a73ba1244cfab21e04a3b5bcad8" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.351579 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=heat-operator-controller-manager-594c8c9d5d-fsch6_openstack-operators(0167f850-ba43-426a-8c56-aa171131e7da)\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.383153 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.383698 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.385452 4865 scope.go:117] "RemoveContainer" containerID="cdfd3ac5584103b22c2a79a31ed0095107b482a2f21d9168033a89df7eff77ee" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.385830 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-77d5c5b54f-qftlt_openstack-operators(6aca96af-acfa-4c68-a2f4-ed19f08ddc4e)\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.552650 4865 scope.go:117] "RemoveContainer" containerID="43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.618855 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.619739 4865 scope.go:117] "RemoveContainer" containerID="ec6b92229dbc3dc459ac92cd5bff829cdf79f412c7047ece466b803430a755e2" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.620079 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-h6dkp_openstack-operators(967c3782-1bce-4145-8244-7650fe19dc22)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.700581 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.701569 4865 scope.go:117] "RemoveContainer" containerID="725cbb8bdc789381556fc95b10f61f4454ce204d5e88b36b62daaf100a191610" Jan 23 12:59:06 crc 
kubenswrapper[4865]: E0123 12:59:06.701970 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-b8b6d4659-9fl7w_openstack-operators(e92ddc14-bdb6-4407-b8a3-047079030166)\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.743953 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.744892 4865 scope.go:117] "RemoveContainer" containerID="32ae1b4369c9079abde65d5f4e4fa0adee9c6e4bc077842197a00acdec6f66a3" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.745144 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-c87fff755-mlm5v_openstack-operators(d2f4bfa4-63e2-418a-b52a-75d2992af596)\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.775049 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:06 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:06 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:06 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.775100 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.779084 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.779118 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.821821 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.845029 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.846188 4865 scope.go:117] "RemoveContainer" containerID="b29868d5a9978529bde12d6e5328ff0f7fb4c7425a6fae7ca2cf9640eba7d400" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.846519 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager 
pod=nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.846705 4865 scope.go:117] "RemoveContainer" containerID="7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.866025 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.866842 4865 scope.go:117] "RemoveContainer" containerID="6cf78a2b1d1d694c28eefbe3bd33e8e63f79645642989bd83eeeaa0c3233b15d" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.867144 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-7bd9774b6-bqtq9_openstack-operators(6d4fbfc8-900e-4c44-a458-039d37a6dd40)\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.920829 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.922459 4865 scope.go:117] "RemoveContainer" containerID="308dca9067f380a5a0d9f4213ded0cec44fafe37706380068a2a22ca270c04ba" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.922801 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-55db956ddc-cbz92_openstack-operators(93194445-a021-4960-ab82-085f13cc959d)\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.940653 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:59:06 crc kubenswrapper[4865]: I0123 12:59:06.942016 4865 scope.go:117] "RemoveContainer" containerID="e8bba2b0d16880a680c2f1646d719f787c9086585a89aecce074435b389bda88" Jan 23 12:59:06 crc kubenswrapper[4865]: E0123 12:59:06.942652 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5d646b7d76-7fdbl_openstack-operators(fb9fb53a-b18e-4291-ab1b-83ac2fd78a73)\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.001925 4865 scope.go:117] "RemoveContainer" containerID="9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.051456 4865 scope.go:117] "RemoveContainer" containerID="7b28b314a62253bab2dff6dd6dcdbd4bdcfa958e016c67c5de24c34342098c1a" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.094176 4865 scope.go:117] "RemoveContainer" 
containerID="589ec817fa33d2945a77636e47bece82f5412244ceb74275ac90da7c251be8f4" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.118280 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:59:07 crc kubenswrapper[4865]: E0123 12:59:07.118525 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.164572 4865 scope.go:117] "RemoveContainer" containerID="6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.254877 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.255539 4865 scope.go:117] "RemoveContainer" containerID="e1228c3b7d8949233ea788cf4a405373f71908b01d266909368fcf0063fd8746" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.269147 4865 scope.go:117] "RemoveContainer" containerID="ffee0a65d3a9d4aaf1aaaa4f2d0daee9888f2045360ad40a337ab9bdd0bd24ba" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.296068 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.297016 4865 scope.go:117] "RemoveContainer" containerID="6db42aed1c07ce277ecd3b8215a67495dfaf2bf15960b45ae32504ccb5fd0d52" Jan 23 12:59:07 crc kubenswrapper[4865]: E0123 12:59:07.297238 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=watcher-operator-controller-manager-5ffb9c6597-7mv2d_openstack-operators(8ef0fdaa-8086-467d-8106-5c6dec532dba)\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.300034 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.300837 4865 scope.go:117] "RemoveContainer" containerID="c8e8f994678810f599e85ee5892ed48a42135387b679873c1a02e57882bacccd" Jan 23 12:59:07 crc kubenswrapper[4865]: E0123 12:59:07.301110 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-547cbdb99f-zm52l_openstack-operators(661fbfd2-7d52-419a-943f-c57854d2306b)\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.303529 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.303671 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.335166 4865 generic.go:334] "Generic (PLEG): container finished" podID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerID="7b463c323817605cac9ed7177dc178ce724f86a06760f71e1fb716d423771420" exitCode=1 Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.335251 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" event={"ID":"429b62c2-b748-40b1-b00f-a1b0488fc5d0","Type":"ContainerDied","Data":"7b463c323817605cac9ed7177dc178ce724f86a06760f71e1fb716d423771420"} Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.335773 4865 scope.go:117] "RemoveContainer" containerID="7b463c323817605cac9ed7177dc178ce724f86a06760f71e1fb716d423771420" Jan 23 12:59:07 crc kubenswrapper[4865]: E0123 12:59:07.336033 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=neutron-operator-controller-manager-5d8f59fb49-hnv8g_openstack-operators(429b62c2-b748-40b1-b00f-a1b0488fc5d0)\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.339613 4865 generic.go:334] "Generic (PLEG): container finished" podID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerID="0a4a8aa869e7f3eeccf0bc0dfd13644dd147d087e939875c4f39814cdc0c5169" exitCode=1 Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.339677 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" event={"ID":"4116044f-0cc3-41fb-9f26-536213e1dfa3","Type":"ContainerDied","Data":"0a4a8aa869e7f3eeccf0bc0dfd13644dd147d087e939875c4f39814cdc0c5169"} Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.340272 4865 scope.go:117] "RemoveContainer" containerID="0a4a8aa869e7f3eeccf0bc0dfd13644dd147d087e939875c4f39814cdc0c5169" Jan 23 12:59:07 crc kubenswrapper[4865]: E0123 12:59:07.340478 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=frr-k8s-webhook-server pod=frr-k8s-webhook-server-7df86c4f6c-dkvk4_metallb-system(4116044f-0cc3-41fb-9f26-536213e1dfa3)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.351566 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerStarted","Data":"c1fccdff35bce6869db28dae53682a3777098670187d98b7b6c30ee3e2b62d82"} Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.352531 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.376661 4865 scope.go:117] "RemoveContainer" containerID="59d3de8a13e8835c1ed76b4149d3cd990bce2009bed5df37c5208d95bb6ad7ef" Jan 23 12:59:07 crc kubenswrapper[4865]: E0123 12:59:07.377183 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=csi-provisioner pod=csi-hostpathplugin-g7l9x_hostpath-provisioner(f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb)\"" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.382796 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-szb9h" event={"ID":"3dee20a9-c14d-4a42-afb1-87d126996c56","Type":"ContainerStarted","Data":"7090bd800d91f637ccaa1100f5ceee8639300992d193dba3e886280899e7ce41"} Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.382891 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-szb9h" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.391132 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-42cdm_843c383b-053f-42f5-88ce-7a216f5354a3/catalog-operator/3.log" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.391698 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-42cdm_843c383b-053f-42f5-88ce-7a216f5354a3/catalog-operator/2.log" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.391751 4865 generic.go:334] "Generic (PLEG): container finished" podID="843c383b-053f-42f5-88ce-7a216f5354a3" containerID="372633d6c1629ea3c133a035a42459ad9888e2416153cf6dc5a143ce8246eb5e" exitCode=1 Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.392030 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" event={"ID":"843c383b-053f-42f5-88ce-7a216f5354a3","Type":"ContainerDied","Data":"372633d6c1629ea3c133a035a42459ad9888e2416153cf6dc5a143ce8246eb5e"} Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.392808 4865 scope.go:117] "RemoveContainer" containerID="372633d6c1629ea3c133a035a42459ad9888e2416153cf6dc5a143ce8246eb5e" Jan 23 12:59:07 crc kubenswrapper[4865]: E0123 12:59:07.393122 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=catalog-operator pod=catalog-operator-68c6474976-42cdm_openshift-operator-lifecycle-manager(843c383b-053f-42f5-88ce-7a216f5354a3)\"" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.394046 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7489ccbc46-6gcbp_a51b0d26-bdc8-433f-90e5-d90b9bd94373/oauth-openshift/2.log" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.394963 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7489ccbc46-6gcbp_a51b0d26-bdc8-433f-90e5-d90b9bd94373/oauth-openshift/1.log" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.402370 4865 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/3.log" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.403028 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/2.log" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.405170 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/1.log" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.406213 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" event={"ID":"2c1ba660-8691-49e2-b0cc-056355d82f4c","Type":"ContainerStarted","Data":"98657247dc1c409a5ad6e3206fa2c1f831709130bf5f95d5042d9473c85fbecf"} Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.406889 4865 scope.go:117] "RemoveContainer" containerID="98657247dc1c409a5ad6e3206fa2c1f831709130bf5f95d5042d9473c85fbecf" Jan 23 12:59:07 crc kubenswrapper[4865]: E0123 12:59:07.407292 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=package-server-manager pod=package-server-manager-789f6589d5-4g249_openshift-operator-lifecycle-manager(2c1ba660-8691-49e2-b0cc-056355d82f4c)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.544960 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/2.log" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.545961 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/1.log" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.546715 4865 scope.go:117] "RemoveContainer" containerID="ad7b58da97aea8e81a4fc78a113fc65a47b6ef7c45356cb1e725a3ba71c07b61" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.546958 4865 scope.go:117] "RemoveContainer" containerID="fa0a27d3b1895affcd782683e375b5e4c30ad4f54a335fd33201c9ac60a1485b" Jan 23 12:59:07 crc kubenswrapper[4865]: E0123 12:59:07.547222 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570)\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:59:07 crc kubenswrapper[4865]: E0123 12:59:07.546963 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-cf98fcc89-7kqtt_cert-manager(15434cef-8cb6-4386-b761-143f1819cac8)\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" 
podUID="15434cef-8cb6-4386-b761-143f1819cac8" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.547296 4865 scope.go:117] "RemoveContainer" containerID="77a357fb6650a9dafbfed58aa1235e55324194e7ac94b81225e205d43eae5a0b" Jan 23 12:59:07 crc kubenswrapper[4865]: E0123 12:59:07.547523 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=olm-operator pod=olm-operator-6b444d44fb-g5xkl_openshift-operator-lifecycle-manager(582f83b4-97dc-4f56-9879-c73fab80488a)\"" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.551357 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.551664 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.551682 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.551707 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.580332 4865 scope.go:117] "RemoveContainer" containerID="f632a0ec27e84458e4e6a53018ba24d615fb3557c7f27191234cdc3926b8f3a4" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.774340 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:07 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:07 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:07 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.774445 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.802243 4865 scope.go:117] "RemoveContainer" 
containerID="e7440e71b764fc1170b4e582df1fa0de60d00e2cc4d7348e19eb5ccc39b95a74" Jan 23 12:59:07 crc kubenswrapper[4865]: I0123 12:59:07.988896 4865 scope.go:117] "RemoveContainer" containerID="23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.281023 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.287506 4865 scope.go:117] "RemoveContainer" containerID="2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.504733 4865 scope.go:117] "RemoveContainer" containerID="e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.546840 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.547000 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.561475 4865 generic.go:334] "Generic (PLEG): container finished" podID="dbfec6f5-80b4-480f-a958-c3107b2776c0" containerID="ec509e0de18f0a1eb53b6c974db80fba7ecc65f2c1424ad1321e243121b7162d" exitCode=1 Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.561553 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" event={"ID":"dbfec6f5-80b4-480f-a958-c3107b2776c0","Type":"ContainerDied","Data":"ec509e0de18f0a1eb53b6c974db80fba7ecc65f2c1424ad1321e243121b7162d"} Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.562131 4865 scope.go:117] "RemoveContainer" containerID="ec509e0de18f0a1eb53b6c974db80fba7ecc65f2c1424ad1321e243121b7162d" Jan 23 12:59:08 crc kubenswrapper[4865]: E0123 12:59:08.562359 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=telemetry-operator-controller-manager-85cd9769bb-kkkcn_openstack-operators(dbfec6f5-80b4-480f-a958-c3107b2776c0)\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.574462 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/3.log" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.575170 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/2.log" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.576481 4865 generic.go:334] "Generic (PLEG): container finished" podID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerID="98657247dc1c409a5ad6e3206fa2c1f831709130bf5f95d5042d9473c85fbecf" exitCode=1 Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.576985 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" event={"ID":"2c1ba660-8691-49e2-b0cc-056355d82f4c","Type":"ContainerDied","Data":"98657247dc1c409a5ad6e3206fa2c1f831709130bf5f95d5042d9473c85fbecf"} Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.578336 4865 scope.go:117] "RemoveContainer" containerID="372633d6c1629ea3c133a035a42459ad9888e2416153cf6dc5a143ce8246eb5e" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.578732 4865 scope.go:117] "RemoveContainer" containerID="7b463c323817605cac9ed7177dc178ce724f86a06760f71e1fb716d423771420" Jan 23 12:59:08 crc kubenswrapper[4865]: E0123 12:59:08.578872 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=catalog-operator pod=catalog-operator-68c6474976-42cdm_openshift-operator-lifecycle-manager(843c383b-053f-42f5-88ce-7a216f5354a3)\"" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.578931 4865 scope.go:117] "RemoveContainer" containerID="98657247dc1c409a5ad6e3206fa2c1f831709130bf5f95d5042d9473c85fbecf" Jan 23 12:59:08 crc kubenswrapper[4865]: E0123 12:59:08.578943 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=neutron-operator-controller-manager-5d8f59fb49-hnv8g_openstack-operators(429b62c2-b748-40b1-b00f-a1b0488fc5d0)\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.579190 4865 scope.go:117] "RemoveContainer" containerID="fa0a27d3b1895affcd782683e375b5e4c30ad4f54a335fd33201c9ac60a1485b" Jan 23 12:59:08 crc kubenswrapper[4865]: E0123 12:59:08.579360 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570)\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:59:08 crc kubenswrapper[4865]: E0123 12:59:08.579535 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=package-server-manager pod=package-server-manager-789f6589d5-4g249_openshift-operator-lifecycle-manager(2c1ba660-8691-49e2-b0cc-056355d82f4c)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.775896 4865 
patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:08 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:08 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:08 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.775984 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.805382 4865 scope.go:117] "RemoveContainer" containerID="23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450" Jan 23 12:59:08 crc kubenswrapper[4865]: E0123 12:59:08.807049 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450\": container with ID starting with 23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450 not found: ID does not exist" containerID="23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.807086 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450"} err="failed to get container status \"23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450\": rpc error: code = NotFound desc = could not find container \"23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450\": container with ID starting with 23728a4ea3c7afe5afac7a04969ce934c2007912113defe71fa4f2a9d2bee450 not found: ID does not exist" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.807107 4865 scope.go:117] "RemoveContainer" containerID="ae1508f276d444032c118fb0b67f9e5568656f5e50cd128fa88fdc40396b41ce" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.963188 4865 scope.go:117] "RemoveContainer" containerID="9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945" Jan 23 12:59:08 crc kubenswrapper[4865]: E0123 12:59:08.963577 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945\": container with ID starting with 9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945 not found: ID does not exist" containerID="9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.963624 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945"} err="failed to get container status \"9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945\": rpc error: code = NotFound desc = could not find container \"9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945\": container with ID starting with 9e86128b56ec762626d202b54c443059cb3b129819e78d791f748ecbe8102945 not found: ID does not exist" Jan 23 12:59:08 crc kubenswrapper[4865]: I0123 12:59:08.963643 4865 scope.go:117] "RemoveContainer" 
containerID="b328d9b54b4bb04befe9c7fb9488ed21d048a4b9e5592f701a8c415ab5bad0a2" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.088360 4865 scope.go:117] "RemoveContainer" containerID="6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82" Jan 23 12:59:09 crc kubenswrapper[4865]: E0123 12:59:09.088722 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82\": container with ID starting with 6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82 not found: ID does not exist" containerID="6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.088753 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82"} err="failed to get container status \"6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82\": rpc error: code = NotFound desc = could not find container \"6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82\": container with ID starting with 6071b3723cd0cca08d7b9083be6c89060422a0c4a6f3fe72762e04723513ce82 not found: ID does not exist" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.088774 4865 scope.go:117] "RemoveContainer" containerID="45ea759c1c5e5541e38c656a91725ebb01f67b53b71f6d6ca75e869cf22a64ba" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.149706 4865 scope.go:117] "RemoveContainer" containerID="49ae166ad84882a23ad44fef712d0badebf178470d9efe058ee94bbc08cd4ec3" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.431374 4865 scope.go:117] "RemoveContainer" containerID="43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5" Jan 23 12:59:09 crc kubenswrapper[4865]: E0123 12:59:09.432079 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5\": container with ID starting with 43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5 not found: ID does not exist" containerID="43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.432117 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5"} err="failed to get container status \"43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5\": rpc error: code = NotFound desc = could not find container \"43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5\": container with ID starting with 43c8c72e006b1a03f413d528e0a92c273fe598723d3bea60994988d6552a84b5 not found: ID does not exist" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.432159 4865 scope.go:117] "RemoveContainer" containerID="59753f8a9ac601813cf61722fa2f680aaa9360854df772d81a25c24ca3e9ccbd" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.586769 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-8lsbn_cfe7c397-99ae-494d-a418-b0f08568f156/console-operator/1.log" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.588939 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/3.log" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.589550 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/2.log" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.593970 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/2.log" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.597936 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7489ccbc46-6gcbp_a51b0d26-bdc8-433f-90e5-d90b9bd94373/oauth-openshift/2.log" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.650440 4865 scope.go:117] "RemoveContainer" containerID="fd24bf374cb93bd1ac3be24ba239a5a2297119650e90d74686695ca9642f7f88" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.774007 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:09 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:09 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:09 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.774091 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:09 crc kubenswrapper[4865]: I0123 12:59:09.926029 4865 scope.go:117] "RemoveContainer" containerID="66b146c5353ddb9b635d97c53b489328a11e843c79525d2f1a00e177a906335e" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.085401 4865 scope.go:117] "RemoveContainer" containerID="6e3b76caed0d76172727765da5704f1260f0f6ff0e355debf75064878c56078f" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.117924 4865 scope.go:117] "RemoveContainer" containerID="b0ee05bb915d1c680af03f7d037e6a8f7dc71e86ef4db97ae98ae4de6c52867a" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.118189 4865 scope.go:117] "RemoveContainer" containerID="0078995fce3d8cc8a8d15ad4adb0633f594c03d387aab557dd4bd184e8947817" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.118669 4865 scope.go:117] "RemoveContainer" containerID="1d19778092c058b0ad0247963b89ab6e7bd59aa27e7238ddba135add037d90ee" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.488963 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.489256 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 23 12:59:10 crc 
kubenswrapper[4865]: I0123 12:59:10.551649 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.551704 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.552018 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.553018 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"1c7f453a8aa0ae056bac8b7a278a7f7156ab8b37aff06ebff06217df84e970bb"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.553131 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" containerID="cri-o://1c7f453a8aa0ae056bac8b7a278a7f7156ab8b37aff06ebff06217df84e970bb" gracePeriod=30 Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.553317 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.553418 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.669347 4865 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-znx59 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": read tcp 10.217.0.2:40168->10.217.0.11:8443: read: connection reset by peer" start-of-body= Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.669418 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.11:8443/healthz\": read tcp 10.217.0.2:40168->10.217.0.11:8443: read: connection reset by peer" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.731212 4865 scope.go:117] "RemoveContainer" containerID="f6355aa5a5dace796906b30065a06ceac12ef8ccf3d9daab57b9c0896657f733" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.767169 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.784560 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:10 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:10 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:10 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.784624 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:10 crc kubenswrapper[4865]: E0123 12:59:10.810410 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-7777fb866f-znx59_openshift-config-operator(141f6171-3d39-421b-98f4-6accc5d30ae2)\"" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.828319 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nhd4g" Jan 23 12:59:10 crc kubenswrapper[4865]: I0123 12:59:10.918799 4865 scope.go:117] "RemoveContainer" containerID="a5571cd178bb438261317ba38387e608ad50d4bb004aa2d11391f7a29dd99411" Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.019071 4865 scope.go:117] "RemoveContainer" containerID="d3495ed84a53f12e4007a0f99ec4d52ea21f2cbe622e4f903c019346c6618125" Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.069397 4865 scope.go:117] "RemoveContainer" containerID="1f82bbc6562ef119de8d44283cf07658ee939e78d2e833cee725ec522543517b" Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.384913 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.625113 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/3.log" Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.625590 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/2.log" Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.626060 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="141f6171-3d39-421b-98f4-6accc5d30ae2" containerID="1c7f453a8aa0ae056bac8b7a278a7f7156ab8b37aff06ebff06217df84e970bb" exitCode=255 Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.626096 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" event={"ID":"141f6171-3d39-421b-98f4-6accc5d30ae2","Type":"ContainerDied","Data":"1c7f453a8aa0ae056bac8b7a278a7f7156ab8b37aff06ebff06217df84e970bb"} Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.626884 4865 scope.go:117] "RemoveContainer" containerID="1c7f453a8aa0ae056bac8b7a278a7f7156ab8b37aff06ebff06217df84e970bb" Jan 23 12:59:11 crc kubenswrapper[4865]: E0123 12:59:11.627149 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-7777fb866f-znx59_openshift-config-operator(141f6171-3d39-421b-98f4-6accc5d30ae2)\"" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.628982 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/2.log" Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.629047 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" event={"ID":"189c80ac-7038-4b48-bebb-5c5d7e2cd362","Type":"ContainerStarted","Data":"faa9c6ad2fabad2d8ae1f9bd37836878ff75fb3663387a72bdc1ee93c863cd03"} Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.742523 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tqvjg" podUID="67ef4926-eb81-4d83-a9a1-4b7e9035892f" containerName="registry-server" probeResult="failure" output=< Jan 23 12:59:11 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 12:59:11 crc kubenswrapper[4865]: > Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.775355 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:11 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:11 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:11 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.775410 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.938746 4865 scope.go:117] "RemoveContainer" containerID="f3aedc3e84f03b5a8e35205c0b6b4acbbbb14f3224c2a1e020ffd763c7603f98" Jan 23 12:59:11 crc kubenswrapper[4865]: I0123 12:59:11.998655 4865 scope.go:117] "RemoveContainer" containerID="9b5e2623653f8096ac3aff4b822b14daf66396447038c4f8cf5cc198e1064fbb" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.118924 4865 scope.go:117] "RemoveContainer" containerID="53713b2e2ad169d7c03a338fa5a445d6705e8dd1da1084b2d62b6ecffc0a9f6b" Jan 23 
12:59:12 crc kubenswrapper[4865]: E0123 12:59:12.119266 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-76c5c47f8f-p49qh_openstack-operators(b2ea2452-dc3b-4b93-a9d4-e562a63111c9)\"" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.125965 4865 scope.go:117] "RemoveContainer" containerID="82a5dc53e670de19adf070d15e3558500b1a04ef07b5381860ecdd360fb8e0fd" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.239888 4865 scope.go:117] "RemoveContainer" containerID="fe8e5fdd26caa016dbb63f464761d687129188d8bc9524e4503f2cdbb1d13171" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.309950 4865 scope.go:117] "RemoveContainer" containerID="8def9c24761c33c45159a6ec2ce99f5dd723a4647323c64aed86dad731e312d5" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.643182 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/3.log" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.643707 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/2.log" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.643752 4865 generic.go:334] "Generic (PLEG): container finished" podID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerID="faa9c6ad2fabad2d8ae1f9bd37836878ff75fb3663387a72bdc1ee93c863cd03" exitCode=1 Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.643809 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" event={"ID":"189c80ac-7038-4b48-bebb-5c5d7e2cd362","Type":"ContainerDied","Data":"faa9c6ad2fabad2d8ae1f9bd37836878ff75fb3663387a72bdc1ee93c863cd03"} Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.644769 4865 scope.go:117] "RemoveContainer" containerID="faa9c6ad2fabad2d8ae1f9bd37836878ff75fb3663387a72bdc1ee93c863cd03" Jan 23 12:59:12 crc kubenswrapper[4865]: E0123 12:59:12.645027 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-7xpgm_openshift-marketplace(189c80ac-7038-4b48-bebb-5c5d7e2cd362)\"" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.658925 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7489ccbc46-6gcbp_a51b0d26-bdc8-433f-90e5-d90b9bd94373/oauth-openshift/2.log" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.659049 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" event={"ID":"a51b0d26-bdc8-433f-90e5-d90b9bd94373","Type":"ContainerStarted","Data":"4bd26460c7107d596f6e82281722c13d41309f2acaeef6043fde99411265ea62"} Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.660125 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 
12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.661664 4865 patch_prober.go:28] interesting pod/oauth-openshift-7489ccbc46-6gcbp container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" start-of-body= Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.661699 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.62:6443/healthz\": dial tcp 10.217.0.62:6443: connect: connection refused" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.668954 4865 generic.go:334] "Generic (PLEG): container finished" podID="a332d40d-1d78-4d9d-b768-b988654c732a" containerID="8cf4698cdb0957f903144e968b184805d998fc4db6eb44b4ecb51ac27de605f1" exitCode=1 Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.669075 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mbdcq" event={"ID":"a332d40d-1d78-4d9d-b768-b988654c732a","Type":"ContainerDied","Data":"8cf4698cdb0957f903144e968b184805d998fc4db6eb44b4ecb51ac27de605f1"} Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.670352 4865 scope.go:117] "RemoveContainer" containerID="8cf4698cdb0957f903144e968b184805d998fc4db6eb44b4ecb51ac27de605f1" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.679113 4865 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-hrzcb container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.679170 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" podUID="c8896518-4b5b-4712-9994-0bb445a3504f" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.680952 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6497cbfbf6-fkmfr_60877fc9-78f8-4298-8104-8cd90e28d3bd/route-controller-manager/1.log" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.693328 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-spv64" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.704186 4865 generic.go:334] "Generic (PLEG): container finished" podID="4cb0a89a-49f9-4a31-9cec-669e88882018" containerID="ecc11a6b708e2ecc2bab9400c87b0ae20de964fb884bde7f72ea4d00a7271726" exitCode=1 Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.704257 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4cb0a89a-49f9-4a31-9cec-669e88882018","Type":"ContainerDied","Data":"ecc11a6b708e2ecc2bab9400c87b0ae20de964fb884bde7f72ea4d00a7271726"} Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.704908 4865 scope.go:117] "RemoveContainer" containerID="ecc11a6b708e2ecc2bab9400c87b0ae20de964fb884bde7f72ea4d00a7271726" Jan 23 12:59:12 crc kubenswrapper[4865]: E0123 
12:59:12.705173 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(4cb0a89a-49f9-4a31-9cec-669e88882018)\"" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.705683 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hz4vm" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.706667 4865 scope.go:117] "RemoveContainer" containerID="d6acb080922eeaa2369550b183fe958e11a57932e047f282f28c4fa5f378419b" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.737277 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.737341 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.737285 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.737506 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.765425 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hh6cp" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.776549 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:12 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:12 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:12 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.776590 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:12 crc kubenswrapper[4865]: I0123 12:59:12.822052 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hh6cp" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.118091 4865 scope.go:117] "RemoveContainer" containerID="1bea2df943dd4b2320c01129fc3e1f605f1e54d306d39726538cd5cb68181c29" Jan 23 12:59:13 crc kubenswrapper[4865]: E0123 12:59:13.118307 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=webhook-server pod=metallb-operator-webhook-server-78f5776895-s7hqg_metallb-system(9177b0d0-3ce7-40fe-8567-85cb8dd5227a)\"" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.178871 4865 scope.go:117] "RemoveContainer" containerID="aabab20e981a150a139733adbff53f4aa6231b20440051727cbd214182f0f5a1" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.201791 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.268326 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qwxxg" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.290159 4865 scope.go:117] "RemoveContainer" containerID="a731dd1f940b77d2d471bc77ff3834d9ed1c0aaa7dc63e059f42afa2cba767ec" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.301686 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.369777 4865 scope.go:117] "RemoveContainer" containerID="d1dcaba699a08e73d448a396063bd12ecc6334242e3ffa33fd02a518ec5c09fe" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.437902 4865 scope.go:117] "RemoveContainer" containerID="7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c" Jan 23 12:59:13 crc kubenswrapper[4865]: E0123 12:59:13.438408 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c\": container with ID starting with 7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c not found: ID does not exist" containerID="7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.438446 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c"} err="failed to get container status \"7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c\": rpc error: code = NotFound desc = could not find container \"7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c\": container with ID starting with 7a8b303281406219d3fed2857d76782749a3324884e609f59f7a8a8b915f3d8c not found: ID does not exist" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.438472 4865 scope.go:117] "RemoveContainer" containerID="225e8ea119b89ec53412b288a78504658e10536d73422bdffe3ca05d7a7e6596" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.596423 4865 scope.go:117] "RemoveContainer" 
containerID="cb91c750d12981120827f7b542090517af3ea3a9ede28ff7cd23321b1eb4911e" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.725514 4865 scope.go:117] "RemoveContainer" containerID="2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4" Jan 23 12:59:13 crc kubenswrapper[4865]: E0123 12:59:13.726520 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4\": container with ID starting with 2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4 not found: ID does not exist" containerID="2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.726545 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4"} err="failed to get container status \"2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4\": rpc error: code = NotFound desc = could not find container \"2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4\": container with ID starting with 2b39a1d8fdfece58e81f0f92c6ffd878d37ee7b14cf88113481fff0e11933ce4 not found: ID does not exist" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.726568 4865 scope.go:117] "RemoveContainer" containerID="f6d9b4b3d5c12dd18a1e548634a2f2a1a036af095d890d878feed5bd34197f18" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.727888 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/3.log" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.744480 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-mbdcq" event={"ID":"a332d40d-1d78-4d9d-b768-b988654c732a","Type":"ContainerStarted","Data":"20ee4b1150ce1e8aebf695622f2e71586b4d8d2fd7a399b33c4368d896adf645"} Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.777940 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:13 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:13 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:13 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.778042 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.792375 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.793861 4865 scope.go:117] "RemoveContainer" containerID="0a4a8aa869e7f3eeccf0bc0dfd13644dd147d087e939875c4f39814cdc0c5169" Jan 23 12:59:13 crc kubenswrapper[4865]: E0123 12:59:13.794284 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=frr-k8s-webhook-server 
pod=frr-k8s-webhook-server-7df86c4f6c-dkvk4_metallb-system(4116044f-0cc3-41fb-9f26-536213e1dfa3)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.793865 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.849118 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.849167 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.849901 4865 scope.go:117] "RemoveContainer" containerID="372633d6c1629ea3c133a035a42459ad9888e2416153cf6dc5a143ce8246eb5e" Jan 23 12:59:13 crc kubenswrapper[4865]: E0123 12:59:13.850151 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=catalog-operator pod=catalog-operator-68c6474976-42cdm_openshift-operator-lifecycle-manager(843c383b-053f-42f5-88ce-7a216f5354a3)\"" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.860530 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7489ccbc46-6gcbp_a51b0d26-bdc8-433f-90e5-d90b9bd94373/oauth-openshift/3.log" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.861342 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7489ccbc46-6gcbp_a51b0d26-bdc8-433f-90e5-d90b9bd94373/oauth-openshift/2.log" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.861387 4865 generic.go:334] "Generic (PLEG): container finished" podID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" containerID="4bd26460c7107d596f6e82281722c13d41309f2acaeef6043fde99411265ea62" exitCode=255 Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.861439 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" event={"ID":"a51b0d26-bdc8-433f-90e5-d90b9bd94373","Type":"ContainerDied","Data":"4bd26460c7107d596f6e82281722c13d41309f2acaeef6043fde99411265ea62"} Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.868833 4865 scope.go:117] "RemoveContainer" containerID="4bd26460c7107d596f6e82281722c13d41309f2acaeef6043fde99411265ea62" Jan 23 12:59:13 crc kubenswrapper[4865]: E0123 12:59:13.869499 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-7489ccbc46-6gcbp_openshift-authentication(a51b0d26-bdc8-433f-90e5-d90b9bd94373)\"" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.879276 4865 scope.go:117] "RemoveContainer" containerID="faa9c6ad2fabad2d8ae1f9bd37836878ff75fb3663387a72bdc1ee93c863cd03" Jan 23 12:59:13 crc kubenswrapper[4865]: E0123 12:59:13.879920 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-7xpgm_openshift-marketplace(189c80ac-7038-4b48-bebb-5c5d7e2cd362)\"" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.883564 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.884733 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.885521 4865 scope.go:117] "RemoveContainer" containerID="77a357fb6650a9dafbfed58aa1235e55324194e7ac94b81225e205d43eae5a0b" Jan 23 12:59:13 crc kubenswrapper[4865]: E0123 12:59:13.886115 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=olm-operator pod=olm-operator-6b444d44fb-g5xkl_openshift-operator-lifecycle-manager(582f83b4-97dc-4f56-9879-c73fab80488a)\"" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.898110 4865 scope.go:117] "RemoveContainer" containerID="9d6ae669bfddc36d31dd962756be7302311916cffdc4eaaa27943b7ba6a5ee53" Jan 23 12:59:13 crc kubenswrapper[4865]: I0123 12:59:13.970344 4865 scope.go:117] "RemoveContainer" containerID="10cd8fab139cf2e40a506f73cefcdd6e86a95345cfc9fa18668937771bceec47" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.008986 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.030305 4865 scope.go:117] "RemoveContainer" containerID="e1228c3b7d8949233ea788cf4a405373f71908b01d266909368fcf0063fd8746" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.118692 4865 scope.go:117] "RemoveContainer" containerID="19590472562d768b58a36819b0839df5422b20ecc8e2438bd400797b00c548e4" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.119037 4865 scope.go:117] "RemoveContainer" containerID="548856cde1055956756c947adf57d4b5401f31359b6a9014ce3c9d05d88051cf" Jan 23 12:59:14 crc kubenswrapper[4865]: E0123 12:59:14.119520 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=controller pod=controller-6968d8fdc4-8bjkz_metallb-system(3685d2b2-151b-479a-92c1-ae400eacd1b9)\"" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.166665 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.167308 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.170372 4865 scope.go:117] "RemoveContainer" 
containerID="98657247dc1c409a5ad6e3206fa2c1f831709130bf5f95d5042d9473c85fbecf" Jan 23 12:59:14 crc kubenswrapper[4865]: E0123 12:59:14.170660 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=package-server-manager pod=package-server-manager-789f6589d5-4g249_openshift-operator-lifecycle-manager(2c1ba660-8691-49e2-b0cc-056355d82f4c)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.188104 4865 scope.go:117] "RemoveContainer" containerID="06479c01739b297256807e4542768173edc9ad564760ed9cd9a0c5e8b7c8e232" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.291184 4865 scope.go:117] "RemoveContainer" containerID="e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22" Jan 23 12:59:14 crc kubenswrapper[4865]: E0123 12:59:14.291541 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22\": container with ID starting with e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22 not found: ID does not exist" containerID="e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.291668 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22"} err="failed to get container status \"e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22\": rpc error: code = NotFound desc = could not find container \"e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22\": container with ID starting with e41ad3aa2d0125fd3b14924ff425d3b2c772db7a716e445afe9b35d5c3e6be22 not found: ID does not exist" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.291705 4865 scope.go:117] "RemoveContainer" containerID="da789a528559c31f8bf0e20e446bbe2e404c5e09244ced0365c858057a65f55a" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.356116 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.356458 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.359492 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"d4e6d818c5e068d51936524f311b0a4ea0a416bc4ab3fabeff119dbfad8a049e"} pod="openstack/horizon-66f7b94cdb-f7pw2" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.359727 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" containerID="cri-o://d4e6d818c5e068d51936524f311b0a4ea0a416bc4ab3fabeff119dbfad8a049e" gracePeriod=30 Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 
12:59:14.440548 4865 scope.go:117] "RemoveContainer" containerID="b0ee05bb915d1c680af03f7d037e6a8f7dc71e86ef4db97ae98ae4de6c52867a" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.492648 4865 scope.go:117] "RemoveContainer" containerID="0078995fce3d8cc8a8d15ad4adb0633f594c03d387aab557dd4bd184e8947817" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.528241 4865 scope.go:117] "RemoveContainer" containerID="1d19778092c058b0ad0247963b89ab6e7bd59aa27e7238ddba135add037d90ee" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.778213 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:14 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:14 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:14 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.778538 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.884311 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.884358 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.884723 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.884810 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.884899 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.886489 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"a2967002ccd792bae8810b20b9e2d2cfe8625b95767e8c6b04826a95e9029999"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" containerMessage="Container 
packageserver failed liveness probe, will be restarted" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.886542 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" containerID="cri-o://a2967002ccd792bae8810b20b9e2d2cfe8625b95767e8c6b04826a95e9029999" gracePeriod=30 Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.887720 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/3.log" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.889111 4865 scope.go:117] "RemoveContainer" containerID="98657247dc1c409a5ad6e3206fa2c1f831709130bf5f95d5042d9473c85fbecf" Jan 23 12:59:14 crc kubenswrapper[4865]: E0123 12:59:14.889583 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=package-server-manager pod=package-server-manager-789f6589d5-4g249_openshift-operator-lifecycle-manager(2c1ba660-8691-49e2-b0cc-056355d82f4c)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.900803 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": read tcp 10.217.0.2:34140->10.217.0.38:5443: read: connection reset by peer" start-of-body= Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.901060 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": read tcp 10.217.0.2:34140->10.217.0.38:5443: read: connection reset by peer" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.901147 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/3.log" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.905789 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-42cdm_843c383b-053f-42f5-88ce-7a216f5354a3/catalog-operator/3.log" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.914029 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7489ccbc46-6gcbp_a51b0d26-bdc8-433f-90e5-d90b9bd94373/oauth-openshift/3.log" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.914863 4865 scope.go:117] "RemoveContainer" containerID="4bd26460c7107d596f6e82281722c13d41309f2acaeef6043fde99411265ea62" Jan 23 12:59:14 crc kubenswrapper[4865]: E0123 12:59:14.915142 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-7489ccbc46-6gcbp_openshift-authentication(a51b0d26-bdc8-433f-90e5-d90b9bd94373)\"" 
pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.922754 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/3.log" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.925346 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" event={"ID":"8e227974-40b8-4d16-8d5f-961b705a9740","Type":"ContainerStarted","Data":"5031a9000ba13266ff563f20ed5d1051c6903c52bb120dfe5b49a8077462e6f4"} Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.926008 4865 scope.go:117] "RemoveContainer" containerID="5031a9000ba13266ff563f20ed5d1051c6903c52bb120dfe5b49a8077462e6f4" Jan 23 12:59:14 crc kubenswrapper[4865]: E0123 12:59:14.926269 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-fdkt9_openstack-operators(8e227974-40b8-4d16-8d5f-961b705a9740)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" Jan 23 12:59:14 crc kubenswrapper[4865]: I0123 12:59:14.929060 4865 scope.go:117] "RemoveContainer" containerID="0a4a8aa869e7f3eeccf0bc0dfd13644dd147d087e939875c4f39814cdc0c5169" Jan 23 12:59:14 crc kubenswrapper[4865]: E0123 12:59:14.929257 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with CrashLoopBackOff: \"back-off 20s restarting failed container=frr-k8s-webhook-server pod=frr-k8s-webhook-server-7df86c4f6c-dkvk4_metallb-system(4116044f-0cc3-41fb-9f26-536213e1dfa3)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.118495 4865 scope.go:117] "RemoveContainer" containerID="cae7350f56e93ca710d0c21c2da50413d1a8d37e184decf6367e6eecde1618f1" Jan 23 12:59:15 crc kubenswrapper[4865]: E0123 12:59:15.118742 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-54ccf4f85d-l6w6d_openstack-operators(2c3366d9-565f-4601-acbb-b473dcfe126c)\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.513389 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.513702 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.774921 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:15 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:15 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:15 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.774965 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.940441 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-d55dfcdfc-xwjxp_2699af1d-57a0-4ce2-9550-b423f9eafc0f/packageserver/2.log" Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.940802 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-d55dfcdfc-xwjxp_2699af1d-57a0-4ce2-9550-b423f9eafc0f/packageserver/1.log" Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.940841 4865 generic.go:334] "Generic (PLEG): container finished" podID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerID="a2967002ccd792bae8810b20b9e2d2cfe8625b95767e8c6b04826a95e9029999" exitCode=2 Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.940880 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" event={"ID":"2699af1d-57a0-4ce2-9550-b423f9eafc0f","Type":"ContainerDied","Data":"a2967002ccd792bae8810b20b9e2d2cfe8625b95767e8c6b04826a95e9029999"} Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.940903 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" event={"ID":"2699af1d-57a0-4ce2-9550-b423f9eafc0f","Type":"ContainerStarted","Data":"226952c05450dc0684fd285cb0fc4f196945fd411634b23273fedf460b7f3ee8"} Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.940918 4865 scope.go:117] "RemoveContainer" containerID="432a1a4071bbfc164b3e505f3e4dcae88a37d22aa2a060b89d9d61d60cbf9348" Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.942236 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.944425 4865 generic.go:334] "Generic (PLEG): container finished" podID="8e227974-40b8-4d16-8d5f-961b705a9740" containerID="5031a9000ba13266ff563f20ed5d1051c6903c52bb120dfe5b49a8077462e6f4" exitCode=1 Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.944518 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" event={"ID":"8e227974-40b8-4d16-8d5f-961b705a9740","Type":"ContainerDied","Data":"5031a9000ba13266ff563f20ed5d1051c6903c52bb120dfe5b49a8077462e6f4"} Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.945094 4865 scope.go:117] "RemoveContainer" containerID="4bd26460c7107d596f6e82281722c13d41309f2acaeef6043fde99411265ea62" Jan 23 12:59:15 crc kubenswrapper[4865]: E0123 12:59:15.945331 4865 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-7489ccbc46-6gcbp_openshift-authentication(a51b0d26-bdc8-433f-90e5-d90b9bd94373)\"" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.945343 4865 scope.go:117] "RemoveContainer" containerID="5031a9000ba13266ff563f20ed5d1051c6903c52bb120dfe5b49a8077462e6f4" Jan 23 12:59:15 crc kubenswrapper[4865]: E0123 12:59:15.945557 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-fdkt9_openstack-operators(8e227974-40b8-4d16-8d5f-961b705a9740)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" Jan 23 12:59:15 crc kubenswrapper[4865]: I0123 12:59:15.982337 4865 scope.go:117] "RemoveContainer" containerID="19590472562d768b58a36819b0839df5422b20ecc8e2438bd400797b00c548e4" Jan 23 12:59:16 crc kubenswrapper[4865]: I0123 12:59:16.178771 4865 scope.go:117] "RemoveContainer" containerID="c8e25097f0f83c69e0e2913b1e2a64e7f61dc0ac6daed1525a600f59e05e5e02" Jan 23 12:59:16 crc kubenswrapper[4865]: E0123 12:59:16.179275 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-7df9698d5d-lk94b_metallb-system(d1a0503d-3fc4-45b6-87c0-7af4a7246a4b)\"" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" Jan 23 12:59:16 crc kubenswrapper[4865]: I0123 12:59:16.398047 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:16 crc kubenswrapper[4865]: I0123 12:59:16.628907 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 12:59:16 crc kubenswrapper[4865]: I0123 12:59:16.629461 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Jan 23 12:59:16 crc kubenswrapper[4865]: I0123 12:59:16.630192 4865 scope.go:117] "RemoveContainer" containerID="ecc11a6b708e2ecc2bab9400c87b0ae20de964fb884bde7f72ea4d00a7271726" Jan 23 12:59:16 crc kubenswrapper[4865]: E0123 12:59:16.630425 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(4cb0a89a-49f9-4a31-9cec-669e88882018)\"" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" Jan 23 12:59:16 crc kubenswrapper[4865]: I0123 12:59:16.774458 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:16 crc kubenswrapper[4865]: [-]has-synced 
failed: reason withheld Jan 23 12:59:16 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:16 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:16 crc kubenswrapper[4865]: I0123 12:59:16.774505 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:16 crc kubenswrapper[4865]: I0123 12:59:16.822075 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:59:16 crc kubenswrapper[4865]: I0123 12:59:16.822743 4865 scope.go:117] "RemoveContainer" containerID="7b463c323817605cac9ed7177dc178ce724f86a06760f71e1fb716d423771420" Jan 23 12:59:16 crc kubenswrapper[4865]: E0123 12:59:16.822953 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=neutron-operator-controller-manager-5d8f59fb49-hnv8g_openstack-operators(429b62c2-b748-40b1-b00f-a1b0488fc5d0)\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" Jan 23 12:59:16 crc kubenswrapper[4865]: I0123 12:59:16.943032 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:16 crc kubenswrapper[4865]: I0123 12:59:16.943103 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:16 crc kubenswrapper[4865]: I0123 12:59:16.954096 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-d55dfcdfc-xwjxp_2699af1d-57a0-4ce2-9550-b423f9eafc0f/packageserver/2.log" Jan 23 12:59:17 crc kubenswrapper[4865]: I0123 12:59:17.118405 4865 scope.go:117] "RemoveContainer" containerID="0c6031c52e32ce61747d89289bc49fa5f3b122eca73c4bd9f57be015ec527eb9" Jan 23 12:59:17 crc kubenswrapper[4865]: I0123 12:59:17.118558 4865 scope.go:117] "RemoveContainer" containerID="f580a6a63b6cd64621bf00584c28b619db653ef30817009688d5a3033aaf33c6" Jan 23 12:59:17 crc kubenswrapper[4865]: E0123 12:59:17.118691 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=glance-operator-controller-manager-78fdd796fd-8qtnc_openstack-operators(da1cf187-8918-46b4-ab33-e8912c9d0dd6)\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" Jan 23 12:59:17 crc kubenswrapper[4865]: I0123 12:59:17.255340 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:59:17 crc kubenswrapper[4865]: I0123 
12:59:17.256267 4865 scope.go:117] "RemoveContainer" containerID="ec509e0de18f0a1eb53b6c974db80fba7ecc65f2c1424ad1321e243121b7162d" Jan 23 12:59:17 crc kubenswrapper[4865]: E0123 12:59:17.256476 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=telemetry-operator-controller-manager-85cd9769bb-kkkcn_openstack-operators(dbfec6f5-80b4-480f-a958-c3107b2776c0)\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:59:17 crc kubenswrapper[4865]: I0123 12:59:17.775323 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:17 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:17 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:17 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:17 crc kubenswrapper[4865]: I0123 12:59:17.775372 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:17 crc kubenswrapper[4865]: I0123 12:59:17.955253 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:17 crc kubenswrapper[4865]: I0123 12:59:17.955323 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:17 crc kubenswrapper[4865]: I0123 12:59:17.966502 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" event={"ID":"5fb13a32-67c3-46b1-a0b8-573e941e6c7e","Type":"ContainerStarted","Data":"820b838e41932b51e492781e46a62aba87e5bd3fe1ad917198cfd13ce68996af"} Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.123706 4865 scope.go:117] "RemoveContainer" containerID="60deb2e894053495e810e7cdb7878c53f79f0b4b14c436447990b3d38be4649d" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.284799 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.358474 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.513717 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.587047 4865 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.685752 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.686759 4865 scope.go:117] "RemoveContainer" containerID="faa9c6ad2fabad2d8ae1f9bd37836878ff75fb3663387a72bdc1ee93c863cd03" Jan 23 12:59:18 crc kubenswrapper[4865]: E0123 12:59:18.687072 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-7xpgm_openshift-marketplace(189c80ac-7038-4b48-bebb-5c5d7e2cd362)\"" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.687298 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.775231 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:18 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:18 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:18 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.775301 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.977763 4865 generic.go:334] "Generic (PLEG): container finished" podID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" containerID="820b838e41932b51e492781e46a62aba87e5bd3fe1ad917198cfd13ce68996af" exitCode=1 Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.977878 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" event={"ID":"5fb13a32-67c3-46b1-a0b8-573e941e6c7e","Type":"ContainerDied","Data":"820b838e41932b51e492781e46a62aba87e5bd3fe1ad917198cfd13ce68996af"} Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.977929 4865 scope.go:117] "RemoveContainer" containerID="f580a6a63b6cd64621bf00584c28b619db653ef30817009688d5a3033aaf33c6" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.978533 4865 scope.go:117] "RemoveContainer" containerID="820b838e41932b51e492781e46a62aba87e5bd3fe1ad917198cfd13ce68996af" Jan 23 12:59:18 crc kubenswrapper[4865]: E0123 12:59:18.978898 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=barbican-operator-controller-manager-59dd8b7cbf-nppmq_openstack-operators(5fb13a32-67c3-46b1-a0b8-573e941e6c7e)\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.980477 4865 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-69f744f599-hrzcb_c8896518-4b5b-4712-9994-0bb445a3504f/authentication-operator/1.log" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.980885 4865 generic.go:334] "Generic (PLEG): container finished" podID="c8896518-4b5b-4712-9994-0bb445a3504f" containerID="33ac499f470506c966c92393cc774dff3e96da3fe666e63559c6a8d2737f9c79" exitCode=255 Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.980967 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" event={"ID":"c8896518-4b5b-4712-9994-0bb445a3504f","Type":"ContainerDied","Data":"33ac499f470506c966c92393cc774dff3e96da3fe666e63559c6a8d2737f9c79"} Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.981387 4865 scope.go:117] "RemoveContainer" containerID="33ac499f470506c966c92393cc774dff3e96da3fe666e63559c6a8d2737f9c79" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.984331 4865 generic.go:334] "Generic (PLEG): container finished" podID="10627175-8e39-4799-bec7-c0b49b938a29" containerID="bd042d4838673f9732c4e8c413fa92e5a1c5e88525bd5aeef263d3b1e9d83000" exitCode=1 Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.984719 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" event={"ID":"10627175-8e39-4799-bec7-c0b49b938a29","Type":"ContainerDied","Data":"bd042d4838673f9732c4e8c413fa92e5a1c5e88525bd5aeef263d3b1e9d83000"} Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.985005 4865 scope.go:117] "RemoveContainer" containerID="faa9c6ad2fabad2d8ae1f9bd37836878ff75fb3663387a72bdc1ee93c863cd03" Jan 23 12:59:18 crc kubenswrapper[4865]: E0123 12:59:18.985278 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-7xpgm_openshift-marketplace(189c80ac-7038-4b48-bebb-5c5d7e2cd362)\"" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" Jan 23 12:59:18 crc kubenswrapper[4865]: I0123 12:59:18.985409 4865 scope.go:117] "RemoveContainer" containerID="bd042d4838673f9732c4e8c413fa92e5a1c5e88525bd5aeef263d3b1e9d83000" Jan 23 12:59:18 crc kubenswrapper[4865]: E0123 12:59:18.985665 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=designate-operator-controller-manager-b45d7bf98-4c94z_openstack-operators(10627175-8e39-4799-bec7-c0b49b938a29)\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" Jan 23 12:59:19 crc kubenswrapper[4865]: I0123 12:59:19.039290 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 12:59:19 crc kubenswrapper[4865]: I0123 12:59:19.058858 4865 scope.go:117] "RemoveContainer" containerID="9e2d67f5b624196c2ca7a39eb784d04a4289c7a162f39ffd49470ee7ed4b98ed" Jan 23 12:59:19 crc kubenswrapper[4865]: I0123 12:59:19.118142 4865 scope.go:117] "RemoveContainer" containerID="c8e8f994678810f599e85ee5892ed48a42135387b679873c1a02e57882bacccd" Jan 23 12:59:19 crc kubenswrapper[4865]: I0123 12:59:19.118194 4865 scope.go:117] "RemoveContainer" 
containerID="6db42aed1c07ce277ecd3b8215a67495dfaf2bf15960b45ae32504ccb5fd0d52" Jan 23 12:59:19 crc kubenswrapper[4865]: I0123 12:59:19.118655 4865 scope.go:117] "RemoveContainer" containerID="725cbb8bdc789381556fc95b10f61f4454ce204d5e88b36b62daaf100a191610" Jan 23 12:59:19 crc kubenswrapper[4865]: I0123 12:59:19.151236 4865 scope.go:117] "RemoveContainer" containerID="60deb2e894053495e810e7cdb7878c53f79f0b4b14c436447990b3d38be4649d" Jan 23 12:59:19 crc kubenswrapper[4865]: I0123 12:59:19.401352 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-txjkp" Jan 23 12:59:19 crc kubenswrapper[4865]: I0123 12:59:19.596275 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-xl95m" Jan 23 12:59:19 crc kubenswrapper[4865]: I0123 12:59:19.774495 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:19 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:19 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:19 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:19 crc kubenswrapper[4865]: I0123 12:59:19.774562 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:19 crc kubenswrapper[4865]: I0123 12:59:19.830473 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 12:59:19 crc kubenswrapper[4865]: I0123 12:59:19.946428 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.001412 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-69f744f599-hrzcb_c8896518-4b5b-4712-9994-0bb445a3504f/authentication-operator/1.log" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.001488 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-hrzcb" event={"ID":"c8896518-4b5b-4712-9994-0bb445a3504f","Type":"ContainerStarted","Data":"9b86353d437abf386f7506e1bef3eac297bd0cbe0e58cf1d7a92397e1a19a42b"} Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.012671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" event={"ID":"661fbfd2-7d52-419a-943f-c57854d2306b","Type":"ContainerStarted","Data":"86ef6bcfe1b3263f30677005188e0a48259b6d6dcdb05efcd30f92f3a527c545"} Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.024017 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.032374 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" 
event={"ID":"e92ddc14-bdb6-4407-b8a3-047079030166","Type":"ContainerStarted","Data":"486a01a99c4c9f08efb4c22aa2be31250963090a89b5e968702a63403a9f6476"} Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.033141 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.037234 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" event={"ID":"8ef0fdaa-8086-467d-8106-5c6dec532dba","Type":"ContainerStarted","Data":"38c3a6ccccdf9a9b276e9ee0f9aa09c7c93bfb6ad347dc4b24a986fb1d05d602"} Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.037966 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.040180 4865 scope.go:117] "RemoveContainer" containerID="820b838e41932b51e492781e46a62aba87e5bd3fe1ad917198cfd13ce68996af" Jan 23 12:59:20 crc kubenswrapper[4865]: E0123 12:59:20.040387 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=barbican-operator-controller-manager-59dd8b7cbf-nppmq_openstack-operators(5fb13a32-67c3-46b1-a0b8-573e941e6c7e)\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.119453 4865 scope.go:117] "RemoveContainer" containerID="e8bba2b0d16880a680c2f1646d719f787c9086585a89aecce074435b389bda88" Jan 23 12:59:20 crc kubenswrapper[4865]: E0123 12:59:20.119765 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5d646b7d76-7fdbl_openstack-operators(fb9fb53a-b18e-4291-ab1b-83ac2fd78a73)\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.119992 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:59:20 crc kubenswrapper[4865]: E0123 12:59:20.120396 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.120491 4865 scope.go:117] "RemoveContainer" containerID="59d3de8a13e8835c1ed76b4149d3cd990bce2009bed5df37c5208d95bb6ad7ef" Jan 23 12:59:20 crc kubenswrapper[4865]: E0123 12:59:20.120749 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=csi-provisioner pod=csi-hostpathplugin-g7l9x_hostpath-provisioner(f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb)\"" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" 
podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.120836 4865 scope.go:117] "RemoveContainer" containerID="cdfd3ac5584103b22c2a79a31ed0095107b482a2f21d9168033a89df7eff77ee" Jan 23 12:59:20 crc kubenswrapper[4865]: E0123 12:59:20.121155 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=horizon-operator-controller-manager-77d5c5b54f-qftlt_openstack-operators(6aca96af-acfa-4c68-a2f4-ed19f08ddc4e)\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.278877 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.330396 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tqvjg" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.489397 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.489942 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.520663 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.535155 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.774202 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:20 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:20 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:20 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.774253 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.817128 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.846761 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 12:59:20 crc kubenswrapper[4865]: I0123 12:59:20.936465 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 12:59:21 crc kubenswrapper[4865]: 
I0123 12:59:21.042002 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.054516 4865 generic.go:334] "Generic (PLEG): container finished" podID="661fbfd2-7d52-419a-943f-c57854d2306b" containerID="86ef6bcfe1b3263f30677005188e0a48259b6d6dcdb05efcd30f92f3a527c545" exitCode=1 Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.054691 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" event={"ID":"661fbfd2-7d52-419a-943f-c57854d2306b","Type":"ContainerDied","Data":"86ef6bcfe1b3263f30677005188e0a48259b6d6dcdb05efcd30f92f3a527c545"} Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.054754 4865 scope.go:117] "RemoveContainer" containerID="c8e8f994678810f599e85ee5892ed48a42135387b679873c1a02e57882bacccd" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.055722 4865 scope.go:117] "RemoveContainer" containerID="86ef6bcfe1b3263f30677005188e0a48259b6d6dcdb05efcd30f92f3a527c545" Jan 23 12:59:21 crc kubenswrapper[4865]: E0123 12:59:21.056113 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=swift-operator-controller-manager-547cbdb99f-zm52l_openstack-operators(661fbfd2-7d52-419a-943f-c57854d2306b)\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.058636 4865 generic.go:334] "Generic (PLEG): container finished" podID="e92ddc14-bdb6-4407-b8a3-047079030166" containerID="486a01a99c4c9f08efb4c22aa2be31250963090a89b5e968702a63403a9f6476" exitCode=1 Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.058726 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" event={"ID":"e92ddc14-bdb6-4407-b8a3-047079030166","Type":"ContainerDied","Data":"486a01a99c4c9f08efb4c22aa2be31250963090a89b5e968702a63403a9f6476"} Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.059427 4865 scope.go:117] "RemoveContainer" containerID="486a01a99c4c9f08efb4c22aa2be31250963090a89b5e968702a63403a9f6476" Jan 23 12:59:21 crc kubenswrapper[4865]: E0123 12:59:21.059853 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=keystone-operator-controller-manager-b8b6d4659-9fl7w_openstack-operators(e92ddc14-bdb6-4407-b8a3-047079030166)\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.061530 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.062298 4865 generic.go:334] "Generic (PLEG): container finished" podID="8ef0fdaa-8086-467d-8106-5c6dec532dba" containerID="38c3a6ccccdf9a9b276e9ee0f9aa09c7c93bfb6ad347dc4b24a986fb1d05d602" exitCode=1 Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.062763 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" 
event={"ID":"8ef0fdaa-8086-467d-8106-5c6dec532dba","Type":"ContainerDied","Data":"38c3a6ccccdf9a9b276e9ee0f9aa09c7c93bfb6ad347dc4b24a986fb1d05d602"} Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.063226 4865 scope.go:117] "RemoveContainer" containerID="38c3a6ccccdf9a9b276e9ee0f9aa09c7c93bfb6ad347dc4b24a986fb1d05d602" Jan 23 12:59:21 crc kubenswrapper[4865]: E0123 12:59:21.063512 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=watcher-operator-controller-manager-5ffb9c6597-7mv2d_openstack-operators(8ef0fdaa-8086-467d-8106-5c6dec532dba)\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.064144 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.072837 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-nwb79" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.118947 4865 scope.go:117] "RemoveContainer" containerID="b8e261b2f93c481bba3b0f111f268ed851bd5a73ba1244cfab21e04a3b5bcad8" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.119223 4865 scope.go:117] "RemoveContainer" containerID="d73a1b61243fadfa9f65fa4fdf2278989e76adf6f98cb19d257fa9e32b8d1db3" Jan 23 12:59:21 crc kubenswrapper[4865]: E0123 12:59:21.119519 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=cinder-operator-controller-manager-69cf5d4557-9jp5b_openstack-operators(bdf8f14b-af0d-43cc-b624-7dab2879dc4b)\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.120259 4865 scope.go:117] "RemoveContainer" containerID="b29868d5a9978529bde12d6e5328ff0f7fb4c7425a6fae7ca2cf9640eba7d400" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.120262 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.120766 4865 scope.go:117] "RemoveContainer" containerID="308dca9067f380a5a0d9f4213ded0cec44fafe37706380068a2a22ca270c04ba" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.121004 4865 scope.go:117] "RemoveContainer" containerID="6cf78a2b1d1d694c28eefbe3bd33e8e63f79645642989bd83eeeaa0c3233b15d" Jan 23 12:59:21 crc kubenswrapper[4865]: E0123 12:59:21.121244 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=octavia-operator-controller-manager-7bd9774b6-bqtq9_openstack-operators(6d4fbfc8-900e-4c44-a458-039d37a6dd40)\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.121864 4865 scope.go:117] "RemoveContainer" containerID="ec6b92229dbc3dc459ac92cd5bff829cdf79f412c7047ece466b803430a755e2" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.130170 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.382821 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.592866 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.643413 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.774831 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:21 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:21 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:21 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.774879 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.825225 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.851370 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.942467 4865 scope.go:117] "RemoveContainer" containerID="725cbb8bdc789381556fc95b10f61f4454ce204d5e88b36b62daaf100a191610" Jan 23 12:59:21 crc kubenswrapper[4865]: I0123 12:59:21.967867 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.038571 4865 scope.go:117] "RemoveContainer" containerID="6db42aed1c07ce277ecd3b8215a67495dfaf2bf15960b45ae32504ccb5fd0d52" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.058139 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.077573 4865 scope.go:117] "RemoveContainer" containerID="486a01a99c4c9f08efb4c22aa2be31250963090a89b5e968702a63403a9f6476" Jan 23 12:59:22 crc kubenswrapper[4865]: E0123 12:59:22.077948 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=keystone-operator-controller-manager-b8b6d4659-9fl7w_openstack-operators(e92ddc14-bdb6-4407-b8a3-047079030166)\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.082898 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" event={"ID":"967c3782-1bce-4145-8244-7650fe19dc22","Type":"ContainerStarted","Data":"b8fc2540ac6be036b476d713f6f32c61384132a04b49c6695b63e036120ddd4b"} Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.084346 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.088913 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" event={"ID":"1959a742-ade2-4266-9a93-e96a1b6e3908","Type":"ContainerStarted","Data":"dac04c172b4cbd023a80707128cb43258f9d6203fd72be3fda5736dc24798b27"} Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.089874 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.094114 4865 scope.go:117] "RemoveContainer" containerID="86ef6bcfe1b3263f30677005188e0a48259b6d6dcdb05efcd30f92f3a527c545" Jan 23 12:59:22 crc kubenswrapper[4865]: E0123 12:59:22.094443 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=swift-operator-controller-manager-547cbdb99f-zm52l_openstack-operators(661fbfd2-7d52-419a-943f-c57854d2306b)\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.097435 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" event={"ID":"0167f850-ba43-426a-8c56-aa171131e7da","Type":"ContainerStarted","Data":"302afc1aa2f5954d846f9b947677f513aa4903bdf8789eb58d4cbe0e85645cff"} Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.098048 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.103078 4865 scope.go:117] "RemoveContainer" containerID="38c3a6ccccdf9a9b276e9ee0f9aa09c7c93bfb6ad347dc4b24a986fb1d05d602" Jan 23 12:59:22 crc kubenswrapper[4865]: E0123 12:59:22.103345 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=watcher-operator-controller-manager-5ffb9c6597-7mv2d_openstack-operators(8ef0fdaa-8086-467d-8106-5c6dec532dba)\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.103639 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" event={"ID":"93194445-a021-4960-ab82-085f13cc959d","Type":"ContainerStarted","Data":"277f7fcd7eac07257ddfbc243d0a0fd950fbd5eaf54d5812499188c9ecf72e79"} Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.104189 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.118368 4865 scope.go:117] "RemoveContainer" 
containerID="ad7b58da97aea8e81a4fc78a113fc65a47b6ef7c45356cb1e725a3ba71c07b61" Jan 23 12:59:22 crc kubenswrapper[4865]: E0123 12:59:22.118711 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-cf98fcc89-7kqtt_cert-manager(15434cef-8cb6-4386-b761-143f1819cac8)\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" podUID="15434cef-8cb6-4386-b761-143f1819cac8" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.121999 4865 scope.go:117] "RemoveContainer" containerID="32ae1b4369c9079abde65d5f4e4fa0adee9c6e4bc077842197a00acdec6f66a3" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.442705 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.445924 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.495012 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.605014 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.714510 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.737519 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.737585 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.737930 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.738073 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.738178 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.739054 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"b872b0e0bd043d0642329878eeb0e70a3ea40665e2b3ce5f0fe8633692775440"} pod="openshift-console-operator/console-operator-58897d9998-8lsbn" containerMessage="Container console-operator failed liveness probe, will be restarted" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.739196 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" containerID="cri-o://b872b0e0bd043d0642329878eeb0e70a3ea40665e2b3ce5f0fe8633692775440" gracePeriod=30 Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.775235 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:22 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:22 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:22 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.775622 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:22 crc kubenswrapper[4865]: I0123 12:59:22.968569 4865 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.068372 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": read tcp 10.217.0.2:37960->10.217.0.10:8443: read: connection reset by peer" start-of-body= Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.068431 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": read tcp 10.217.0.2:37960->10.217.0.10:8443: read: connection reset by peer" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.125642 4865 generic.go:334] "Generic (PLEG): container finished" podID="0167f850-ba43-426a-8c56-aa171131e7da" containerID="302afc1aa2f5954d846f9b947677f513aa4903bdf8789eb58d4cbe0e85645cff" exitCode=1 Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.125834 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" event={"ID":"0167f850-ba43-426a-8c56-aa171131e7da","Type":"ContainerDied","Data":"302afc1aa2f5954d846f9b947677f513aa4903bdf8789eb58d4cbe0e85645cff"} Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.126148 4865 scope.go:117] "RemoveContainer" containerID="b8e261b2f93c481bba3b0f111f268ed851bd5a73ba1244cfab21e04a3b5bcad8" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.129413 4865 scope.go:117] "RemoveContainer" 
containerID="302afc1aa2f5954d846f9b947677f513aa4903bdf8789eb58d4cbe0e85645cff" Jan 23 12:59:23 crc kubenswrapper[4865]: E0123 12:59:23.130814 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=heat-operator-controller-manager-594c8c9d5d-fsch6_openstack-operators(0167f850-ba43-426a-8c56-aa171131e7da)\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.132994 4865 generic.go:334] "Generic (PLEG): container finished" podID="93194445-a021-4960-ab82-085f13cc959d" containerID="277f7fcd7eac07257ddfbc243d0a0fd950fbd5eaf54d5812499188c9ecf72e79" exitCode=1 Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.133066 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" event={"ID":"93194445-a021-4960-ab82-085f13cc959d","Type":"ContainerDied","Data":"277f7fcd7eac07257ddfbc243d0a0fd950fbd5eaf54d5812499188c9ecf72e79"} Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.135142 4865 scope.go:117] "RemoveContainer" containerID="277f7fcd7eac07257ddfbc243d0a0fd950fbd5eaf54d5812499188c9ecf72e79" Jan 23 12:59:23 crc kubenswrapper[4865]: E0123 12:59:23.135551 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ovn-operator-controller-manager-55db956ddc-cbz92_openstack-operators(93194445-a021-4960-ab82-085f13cc959d)\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.142251 4865 generic.go:334] "Generic (PLEG): container finished" podID="967c3782-1bce-4145-8244-7650fe19dc22" containerID="b8fc2540ac6be036b476d713f6f32c61384132a04b49c6695b63e036120ddd4b" exitCode=1 Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.143029 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" event={"ID":"967c3782-1bce-4145-8244-7650fe19dc22","Type":"ContainerDied","Data":"b8fc2540ac6be036b476d713f6f32c61384132a04b49c6695b63e036120ddd4b"} Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.143515 4865 scope.go:117] "RemoveContainer" containerID="b8fc2540ac6be036b476d713f6f32c61384132a04b49c6695b63e036120ddd4b" Jan 23 12:59:23 crc kubenswrapper[4865]: E0123 12:59:23.143765 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-h6dkp_openstack-operators(967c3782-1bce-4145-8244-7650fe19dc22)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.149939 4865 generic.go:334] "Generic (PLEG): container finished" podID="1959a742-ade2-4266-9a93-e96a1b6e3908" containerID="dac04c172b4cbd023a80707128cb43258f9d6203fd72be3fda5736dc24798b27" exitCode=1 Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.149980 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" 
event={"ID":"1959a742-ade2-4266-9a93-e96a1b6e3908","Type":"ContainerDied","Data":"dac04c172b4cbd023a80707128cb43258f9d6203fd72be3fda5736dc24798b27"} Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.150339 4865 scope.go:117] "RemoveContainer" containerID="dac04c172b4cbd023a80707128cb43258f9d6203fd72be3fda5736dc24798b27" Jan 23 12:59:23 crc kubenswrapper[4865]: E0123 12:59:23.150513 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.154882 4865 generic.go:334] "Generic (PLEG): container finished" podID="d2f4bfa4-63e2-418a-b52a-75d2992af596" containerID="4b92021aa738d8ce5a77d8b00baf08a60d2b581b8a7c210f45e831e89d21b25d" exitCode=1 Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.154908 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" event={"ID":"d2f4bfa4-63e2-418a-b52a-75d2992af596","Type":"ContainerDied","Data":"4b92021aa738d8ce5a77d8b00baf08a60d2b581b8a7c210f45e831e89d21b25d"} Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.155189 4865 scope.go:117] "RemoveContainer" containerID="4b92021aa738d8ce5a77d8b00baf08a60d2b581b8a7c210f45e831e89d21b25d" Jan 23 12:59:23 crc kubenswrapper[4865]: E0123 12:59:23.155373 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=mariadb-operator-controller-manager-c87fff755-mlm5v_openstack-operators(d2f4bfa4-63e2-418a-b52a-75d2992af596)\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.187214 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-f7zrz" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.269093 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.278137 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.375748 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.381485 4865 scope.go:117] "RemoveContainer" containerID="308dca9067f380a5a0d9f4213ded0cec44fafe37706380068a2a22ca270c04ba" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.437661 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.437865 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.500852 4865 scope.go:117] 
"RemoveContainer" containerID="ec6b92229dbc3dc459ac92cd5bff829cdf79f412c7047ece466b803430a755e2" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.610571 4865 scope.go:117] "RemoveContainer" containerID="b29868d5a9978529bde12d6e5328ff0f7fb4c7425a6fae7ca2cf9640eba7d400" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.736201 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-799xn" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.737802 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.757476 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4dh5h" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.775245 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:23 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:23 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:23 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.775310 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.777229 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.777344 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.825337 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.859883 4865 scope.go:117] "RemoveContainer" containerID="32ae1b4369c9079abde65d5f4e4fa0adee9c6e4bc077842197a00acdec6f66a3" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.878123 4865 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.882741 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.928298 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.956157 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 12:59:23 crc kubenswrapper[4865]: I0123 12:59:23.961457 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"ceilometer-config-data" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.020902 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.114788 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.119244 4865 scope.go:117] "RemoveContainer" containerID="fa0a27d3b1895affcd782683e375b5e4c30ad4f54a335fd33201c9ac60a1485b" Jan 23 12:59:24 crc kubenswrapper[4865]: E0123 12:59:24.119779 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570)\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.165216 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-8lsbn_cfe7c397-99ae-494d-a418-b0f08568f156/console-operator/2.log" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.165858 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-8lsbn_cfe7c397-99ae-494d-a418-b0f08568f156/console-operator/1.log" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.165902 4865 generic.go:334] "Generic (PLEG): container finished" podID="cfe7c397-99ae-494d-a418-b0f08568f156" containerID="b872b0e0bd043d0642329878eeb0e70a3ea40665e2b3ce5f0fe8633692775440" exitCode=255 Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.165966 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" event={"ID":"cfe7c397-99ae-494d-a418-b0f08568f156","Type":"ContainerDied","Data":"b872b0e0bd043d0642329878eeb0e70a3ea40665e2b3ce5f0fe8633692775440"} Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.165992 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" event={"ID":"cfe7c397-99ae-494d-a418-b0f08568f156","Type":"ContainerStarted","Data":"fa9dd794a58f23a0f179f0ca7e9c3fad00f20221e2dbae58297794bfbb4fe596"} Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.166008 4865 scope.go:117] "RemoveContainer" containerID="7fa635d8424d3f66c94287f9ba0ad214fc331b0c607cab47353c96a11d4e376e" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.166678 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.170233 4865 scope.go:117] "RemoveContainer" containerID="302afc1aa2f5954d846f9b947677f513aa4903bdf8789eb58d4cbe0e85645cff" Jan 23 12:59:24 crc kubenswrapper[4865]: E0123 12:59:24.170433 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=heat-operator-controller-manager-594c8c9d5d-fsch6_openstack-operators(0167f850-ba43-426a-8c56-aa171131e7da)\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" Jan 23 12:59:24 crc kubenswrapper[4865]: 
I0123 12:59:24.172959 4865 scope.go:117] "RemoveContainer" containerID="277f7fcd7eac07257ddfbc243d0a0fd950fbd5eaf54d5812499188c9ecf72e79" Jan 23 12:59:24 crc kubenswrapper[4865]: E0123 12:59:24.173150 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ovn-operator-controller-manager-55db956ddc-cbz92_openstack-operators(93194445-a021-4960-ab82-085f13cc959d)\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.174258 4865 scope.go:117] "RemoveContainer" containerID="b8fc2540ac6be036b476d713f6f32c61384132a04b49c6695b63e036120ddd4b" Jan 23 12:59:24 crc kubenswrapper[4865]: E0123 12:59:24.174432 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-h6dkp_openstack-operators(967c3782-1bce-4145-8244-7650fe19dc22)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.176862 4865 scope.go:117] "RemoveContainer" containerID="dac04c172b4cbd023a80707128cb43258f9d6203fd72be3fda5736dc24798b27" Jan 23 12:59:24 crc kubenswrapper[4865]: E0123 12:59:24.177074 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.331957 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.564115 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.630936 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.636068 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.657279 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-wqr8h" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.715976 4865 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.718391 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.776581 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:24 crc kubenswrapper[4865]: 
[-]has-synced failed: reason withheld Jan 23 12:59:24 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:24 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.776695 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.804372 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.804627 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.873385 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.883065 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.883167 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.883590 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.883744 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.916290 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 23 12:59:24 crc kubenswrapper[4865]: I0123 12:59:24.947423 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.001093 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.045143 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.108181 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.118313 4865 scope.go:117] "RemoveContainer" containerID="1c7f453a8aa0ae056bac8b7a278a7f7156ab8b37aff06ebff06217df84e970bb" Jan 23 12:59:25 crc kubenswrapper[4865]: E0123 12:59:25.118564 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-7777fb866f-znx59_openshift-config-operator(141f6171-3d39-421b-98f4-6accc5d30ae2)\"" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" podUID="141f6171-3d39-421b-98f4-6accc5d30ae2" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.119002 4865 scope.go:117] "RemoveContainer" containerID="372633d6c1629ea3c133a035a42459ad9888e2416153cf6dc5a143ce8246eb5e" Jan 23 12:59:25 crc kubenswrapper[4865]: E0123 12:59:25.119546 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"catalog-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=catalog-operator pod=catalog-operator-68c6474976-42cdm_openshift-operator-lifecycle-manager(843c383b-053f-42f5-88ce-7a216f5354a3)\"" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" podUID="843c383b-053f-42f5-88ce-7a216f5354a3" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.167106 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.167503 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.193440 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-8lsbn_cfe7c397-99ae-494d-a418-b0f08568f156/console-operator/2.log" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.330895 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.370301 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.409931 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jxsb6" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.432363 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-nmstate"/"nmstate-operator-dockercfg-zrzvh" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.453144 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.462940 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.473969 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.511847 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.511930 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.578836 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.604308 4865 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.672963 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.706065 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.755767 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.777146 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ng7tq" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.778522 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:25 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:25 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:25 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.778917 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.861595 4865 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.882187 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 23 12:59:25 crc kubenswrapper[4865]: I0123 12:59:25.952568 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.075408 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.125279 4865 scope.go:117] "RemoveContainer" containerID="53713b2e2ad169d7c03a338fa5a445d6705e8dd1da1084b2d62b6ecffc0a9f6b" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.125378 4865 scope.go:117] "RemoveContainer" containerID="98657247dc1c409a5ad6e3206fa2c1f831709130bf5f95d5042d9473c85fbecf" Jan 23 12:59:26 crc kubenswrapper[4865]: E0123 12:59:26.125664 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=package-server-manager pod=package-server-manager-789f6589d5-4g249_openshift-operator-lifecycle-manager(2c1ba660-8691-49e2-b0cc-056355d82f4c)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.172860 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-wpr76" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.172974 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.195886 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.195992 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.217358 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.217613 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.218283 4865 scope.go:117] "RemoveContainer" containerID="820b838e41932b51e492781e46a62aba87e5bd3fe1ad917198cfd13ce68996af" Jan 23 12:59:26 crc kubenswrapper[4865]: E0123 12:59:26.218501 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=manager pod=barbican-operator-controller-manager-59dd8b7cbf-nppmq_openstack-operators(5fb13a32-67c3-46b1-a0b8-573e941e6c7e)\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.259846 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.259882 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.260361 4865 scope.go:117] "RemoveContainer" containerID="bd042d4838673f9732c4e8c413fa92e5a1c5e88525bd5aeef263d3b1e9d83000" Jan 23 12:59:26 crc kubenswrapper[4865]: E0123 12:59:26.260578 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=designate-operator-controller-manager-b45d7bf98-4c94z_openstack-operators(10627175-8e39-4799-bec7-c0b49b938a29)\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.280408 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.344342 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.348052 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.348805 4865 scope.go:117] "RemoveContainer" containerID="302afc1aa2f5954d846f9b947677f513aa4903bdf8789eb58d4cbe0e85645cff" Jan 23 12:59:26 crc kubenswrapper[4865]: E0123 12:59:26.349224 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=heat-operator-controller-manager-594c8c9d5d-fsch6_openstack-operators(0167f850-ba43-426a-8c56-aa171131e7da)\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.390160 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.396040 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.402437 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.474735 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 12:59:26 crc 
kubenswrapper[4865]: I0123 12:59:26.557843 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.615693 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-dpkkq" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.619152 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.619871 4865 scope.go:117] "RemoveContainer" containerID="b8fc2540ac6be036b476d713f6f32c61384132a04b49c6695b63e036120ddd4b" Jan 23 12:59:26 crc kubenswrapper[4865]: E0123 12:59:26.620169 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-h6dkp_openstack-operators(967c3782-1bce-4145-8244-7650fe19dc22)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.637520 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.657240 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.700438 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.701189 4865 scope.go:117] "RemoveContainer" containerID="486a01a99c4c9f08efb4c22aa2be31250963090a89b5e968702a63403a9f6476" Jan 23 12:59:26 crc kubenswrapper[4865]: E0123 12:59:26.701435 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=keystone-operator-controller-manager-b8b6d4659-9fl7w_openstack-operators(e92ddc14-bdb6-4407-b8a3-047079030166)\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.735555 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.744403 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.744632 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.745657 4865 scope.go:117] "RemoveContainer" containerID="4b92021aa738d8ce5a77d8b00baf08a60d2b581b8a7c210f45e831e89d21b25d" Jan 23 12:59:26 crc kubenswrapper[4865]: E0123 12:59:26.746058 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager 
pod=mariadb-operator-controller-manager-c87fff755-mlm5v_openstack-operators(d2f4bfa4-63e2-418a-b52a-75d2992af596)\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.775883 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:26 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:26 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:26 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.775953 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.821386 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.822173 4865 scope.go:117] "RemoveContainer" containerID="7b463c323817605cac9ed7177dc178ce724f86a06760f71e1fb716d423771420" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.845288 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.846060 4865 scope.go:117] "RemoveContainer" containerID="dac04c172b4cbd023a80707128cb43258f9d6203fd72be3fda5736dc24798b27" Jan 23 12:59:26 crc kubenswrapper[4865]: E0123 12:59:26.846312 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.907144 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.921016 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.922672 4865 scope.go:117] "RemoveContainer" containerID="277f7fcd7eac07257ddfbc243d0a0fd950fbd5eaf54d5812499188c9ecf72e79" Jan 23 12:59:26 crc kubenswrapper[4865]: E0123 12:59:26.923270 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ovn-operator-controller-manager-55db956ddc-cbz92_openstack-operators(93194445-a021-4960-ab82-085f13cc959d)\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.931840 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 23 
12:59:26 crc kubenswrapper[4865]: I0123 12:59:26.995720 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.013098 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.063552 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.094590 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.118060 4865 scope.go:117] "RemoveContainer" containerID="ecc11a6b708e2ecc2bab9400c87b0ae20de964fb884bde7f72ea4d00a7271726" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.118173 4865 scope.go:117] "RemoveContainer" containerID="77a357fb6650a9dafbfed58aa1235e55324194e7ac94b81225e205d43eae5a0b" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.118259 4865 scope.go:117] "RemoveContainer" containerID="c8e25097f0f83c69e0e2913b1e2a64e7f61dc0ac6daed1525a600f59e05e5e02" Jan 23 12:59:27 crc kubenswrapper[4865]: E0123 12:59:27.118306 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(4cb0a89a-49f9-4a31-9cec-669e88882018)\"" pod="openstack/kube-state-metrics-0" podUID="4cb0a89a-49f9-4a31-9cec-669e88882018" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.220665 4865 generic.go:334] "Generic (PLEG): container finished" podID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" containerID="5ee6b1222e9411f014bd61cf294c67ddc555f8ce71c7f5164add3cbb62c2ca98" exitCode=1 Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.220729 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" event={"ID":"b2ea2452-dc3b-4b93-a9d4-e562a63111c9","Type":"ContainerDied","Data":"5ee6b1222e9411f014bd61cf294c67ddc555f8ce71c7f5164add3cbb62c2ca98"} Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.220810 4865 scope.go:117] "RemoveContainer" containerID="53713b2e2ad169d7c03a338fa5a445d6705e8dd1da1084b2d62b6ecffc0a9f6b" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.221395 4865 scope.go:117] "RemoveContainer" containerID="5ee6b1222e9411f014bd61cf294c67ddc555f8ce71c7f5164add3cbb62c2ca98" Jan 23 12:59:27 crc kubenswrapper[4865]: E0123 12:59:27.221866 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=openstack-operator-controller-manager-76c5c47f8f-p49qh_openstack-operators(b2ea2452-dc3b-4b93-a9d4-e562a63111c9)\"" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.227107 4865 generic.go:334] "Generic (PLEG): container finished" podID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" containerID="db5b8fe7b7736bdd3f92d1d332b37184bf21bfd7a6ea6dcadb3721bb397f71d3" exitCode=1 Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.227393 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" event={"ID":"429b62c2-b748-40b1-b00f-a1b0488fc5d0","Type":"ContainerDied","Data":"db5b8fe7b7736bdd3f92d1d332b37184bf21bfd7a6ea6dcadb3721bb397f71d3"} Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.227780 4865 scope.go:117] "RemoveContainer" containerID="4b92021aa738d8ce5a77d8b00baf08a60d2b581b8a7c210f45e831e89d21b25d" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.227872 4865 scope.go:117] "RemoveContainer" containerID="db5b8fe7b7736bdd3f92d1d332b37184bf21bfd7a6ea6dcadb3721bb397f71d3" Jan 23 12:59:27 crc kubenswrapper[4865]: E0123 12:59:27.228013 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=mariadb-operator-controller-manager-c87fff755-mlm5v_openstack-operators(d2f4bfa4-63e2-418a-b52a-75d2992af596)\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 23 12:59:27 crc kubenswrapper[4865]: E0123 12:59:27.228177 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=neutron-operator-controller-manager-5d8f59fb49-hnv8g_openstack-operators(429b62c2-b748-40b1-b00f-a1b0488fc5d0)\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.229708 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.254719 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.255462 4865 scope.go:117] "RemoveContainer" containerID="ec509e0de18f0a1eb53b6c974db80fba7ecc65f2c1424ad1321e243121b7162d" Jan 23 12:59:27 crc kubenswrapper[4865]: E0123 12:59:27.255716 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=telemetry-operator-controller-manager-85cd9769bb-kkkcn_openstack-operators(dbfec6f5-80b4-480f-a958-c3107b2776c0)\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.266362 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-wx66p" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.296153 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.297144 4865 scope.go:117] "RemoveContainer" containerID="38c3a6ccccdf9a9b276e9ee0f9aa09c7c93bfb6ad347dc4b24a986fb1d05d602" Jan 23 12:59:27 crc kubenswrapper[4865]: E0123 12:59:27.297638 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager 
pod=watcher-operator-controller-manager-5ffb9c6597-7mv2d_openstack-operators(8ef0fdaa-8086-467d-8106-5c6dec532dba)\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.299043 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.299823 4865 scope.go:117] "RemoveContainer" containerID="86ef6bcfe1b3263f30677005188e0a48259b6d6dcdb05efcd30f92f3a527c545" Jan 23 12:59:27 crc kubenswrapper[4865]: E0123 12:59:27.300138 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=swift-operator-controller-manager-547cbdb99f-zm52l_openstack-operators(661fbfd2-7d52-419a-943f-c57854d2306b)\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.314299 4865 scope.go:117] "RemoveContainer" containerID="7b463c323817605cac9ed7177dc178ce724f86a06760f71e1fb716d423771420" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.326728 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.494722 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.551759 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.586276 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.619489 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-66xxd" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.622541 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.691704 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.776268 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:27 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:27 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:27 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.776346 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.831328 4865 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.850915 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.886728 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 12:59:27 crc kubenswrapper[4865]: I0123 12:59:27.904914 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.005830 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.087496 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.118568 4865 scope.go:117] "RemoveContainer" containerID="1bea2df943dd4b2320c01129fc3e1f605f1e54d306d39726538cd5cb68181c29" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.120132 4865 scope.go:117] "RemoveContainer" containerID="4bd26460c7107d596f6e82281722c13d41309f2acaeef6043fde99411265ea62" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.120273 4865 scope.go:117] "RemoveContainer" containerID="548856cde1055956756c947adf57d4b5401f31359b6a9014ce3c9d05d88051cf" Jan 23 12:59:28 crc kubenswrapper[4865]: E0123 12:59:28.120396 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-7489ccbc46-6gcbp_openshift-authentication(a51b0d26-bdc8-433f-90e5-d90b9bd94373)\"" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" podUID="a51b0d26-bdc8-433f-90e5-d90b9bd94373" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.120639 4865 scope.go:117] "RemoveContainer" containerID="cae7350f56e93ca710d0c21c2da50413d1a8d37e184decf6367e6eecde1618f1" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.213573 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.235582 4865 generic.go:334] "Generic (PLEG): container finished" podID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" containerID="5a717a28c3ab6313b6531cf15eb4cb0ab0c1f50459f10a50e3c41ba8d6a77900" exitCode=1 Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.235642 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" event={"ID":"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b","Type":"ContainerDied","Data":"5a717a28c3ab6313b6531cf15eb4cb0ab0c1f50459f10a50e3c41ba8d6a77900"} Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.235672 4865 scope.go:117] "RemoveContainer" containerID="c8e25097f0f83c69e0e2913b1e2a64e7f61dc0ac6daed1525a600f59e05e5e02" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.236361 4865 scope.go:117] "RemoveContainer" containerID="5a717a28c3ab6313b6531cf15eb4cb0ab0c1f50459f10a50e3c41ba8d6a77900" Jan 23 12:59:28 crc kubenswrapper[4865]: E0123 12:59:28.236638 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s 
restarting failed container=manager pod=metallb-operator-controller-manager-7df9698d5d-lk94b_metallb-system(d1a0503d-3fc4-45b6-87c0-7af4a7246a4b)\"" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.252234 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/4.log" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.252719 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/3.log" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.252769 4865 generic.go:334] "Generic (PLEG): container finished" podID="582f83b4-97dc-4f56-9879-c73fab80488a" containerID="37863c5fef2db414e26d4bc0b4d8291a7aa8d9062cb97e690a84bea171062f6d" exitCode=1 Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.252853 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" event={"ID":"582f83b4-97dc-4f56-9879-c73fab80488a","Type":"ContainerDied","Data":"37863c5fef2db414e26d4bc0b4d8291a7aa8d9062cb97e690a84bea171062f6d"} Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.253634 4865 scope.go:117] "RemoveContainer" containerID="37863c5fef2db414e26d4bc0b4d8291a7aa8d9062cb97e690a84bea171062f6d" Jan 23 12:59:28 crc kubenswrapper[4865]: E0123 12:59:28.253876 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=olm-operator pod=olm-operator-6b444d44fb-g5xkl_openshift-operator-lifecycle-manager(582f83b4-97dc-4f56-9879-c73fab80488a)\"" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.327028 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.327243 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.327390 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.335290 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.370648 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.489418 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.633090 4865 scope.go:117] "RemoveContainer" containerID="77a357fb6650a9dafbfed58aa1235e55324194e7ac94b81225e205d43eae5a0b" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.725897 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-cell0-conductor-config-data" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.766954 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 23 12:59:28 crc kubenswrapper[4865]: E0123 12:59:28.767472 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3685d2b2_151b_479a_92c1_ae400eacd1b9.slice/crio-conmon-c9af27a61d3dc1d7f7b2d575fbafd518ca1e36a8d58fe436f0ce27465d9bdc48.scope\": RecentStats: unable to find data in memory cache]" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.774897 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:28 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:28 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:28 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.774949 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.836567 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.865581 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.870509 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-9b24m" Jan 23 12:59:28 crc kubenswrapper[4865]: I0123 12:59:28.981660 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.042192 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.056262 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.118195 4865 scope.go:117] "RemoveContainer" containerID="0a4a8aa869e7f3eeccf0bc0dfd13644dd147d087e939875c4f39814cdc0c5169" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.165821 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.167212 4865 scope.go:117] "RemoveContainer" containerID="5ee6b1222e9411f014bd61cf294c67ddc555f8ce71c7f5164add3cbb62c2ca98" Jan 23 12:59:29 crc kubenswrapper[4865]: E0123 12:59:29.167498 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager 
pod=openstack-operator-controller-manager-76c5c47f8f-p49qh_openstack-operators(b2ea2452-dc3b-4b93-a9d4-e562a63111c9)\"" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.189961 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.191627 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.244335 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.277021 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.278284 4865 generic.go:334] "Generic (PLEG): container finished" podID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" containerID="a662702efe9e87379f485c54ea659eea964b9ab589f691c6f62222a3e3f9537a" exitCode=1 Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.278343 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" event={"ID":"9177b0d0-3ce7-40fe-8567-85cb8dd5227a","Type":"ContainerDied","Data":"a662702efe9e87379f485c54ea659eea964b9ab589f691c6f62222a3e3f9537a"} Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.278452 4865 scope.go:117] "RemoveContainer" containerID="1bea2df943dd4b2320c01129fc3e1f605f1e54d306d39726538cd5cb68181c29" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.279198 4865 scope.go:117] "RemoveContainer" containerID="a662702efe9e87379f485c54ea659eea964b9ab589f691c6f62222a3e3f9537a" Jan 23 12:59:29 crc kubenswrapper[4865]: E0123 12:59:29.279545 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=webhook-server pod=metallb-operator-webhook-server-78f5776895-s7hqg_metallb-system(9177b0d0-3ce7-40fe-8567-85cb8dd5227a)\"" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.283260 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/4.log" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.287130 4865 generic.go:334] "Generic (PLEG): container finished" podID="3685d2b2-151b-479a-92c1-ae400eacd1b9" containerID="c9af27a61d3dc1d7f7b2d575fbafd518ca1e36a8d58fe436f0ce27465d9bdc48" exitCode=1 Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.287218 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8bjkz" event={"ID":"3685d2b2-151b-479a-92c1-ae400eacd1b9","Type":"ContainerDied","Data":"c9af27a61d3dc1d7f7b2d575fbafd518ca1e36a8d58fe436f0ce27465d9bdc48"} Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.288344 4865 scope.go:117] "RemoveContainer" containerID="c9af27a61d3dc1d7f7b2d575fbafd518ca1e36a8d58fe436f0ce27465d9bdc48" Jan 23 12:59:29 crc kubenswrapper[4865]: E0123 12:59:29.288880 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=controller pod=controller-6968d8fdc4-8bjkz_metallb-system(3685d2b2-151b-479a-92c1-ae400eacd1b9)\"" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.291835 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.293439 4865 generic.go:334] "Generic (PLEG): container finished" podID="2c3366d9-565f-4601-acbb-b473dcfe126c" containerID="4b3f708fd26a8e6b4ad737d720f4693da3a7df553a47fa2c49c9d37673a2b97b" exitCode=1 Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.293524 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" event={"ID":"2c3366d9-565f-4601-acbb-b473dcfe126c","Type":"ContainerDied","Data":"4b3f708fd26a8e6b4ad737d720f4693da3a7df553a47fa2c49c9d37673a2b97b"} Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.294324 4865 scope.go:117] "RemoveContainer" containerID="4b3f708fd26a8e6b4ad737d720f4693da3a7df553a47fa2c49c9d37673a2b97b" Jan 23 12:59:29 crc kubenswrapper[4865]: E0123 12:59:29.294679 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-54ccf4f85d-l6w6d_openstack-operators(2c3366d9-565f-4601-acbb-b473dcfe126c)\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.296040 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.355386 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.383000 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.384272 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-b4zk8" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.422041 4865 scope.go:117] "RemoveContainer" containerID="548856cde1055956756c947adf57d4b5401f31359b6a9014ce3c9d05d88051cf" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.489546 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.593959 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.753686 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5fbhx" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.774185 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:29 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 
12:59:29 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:29 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.774565 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.900976 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qbdsl" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.916222 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.938134 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 12:59:29 crc kubenswrapper[4865]: I0123 12:59:29.977240 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.009523 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.118827 4865 scope.go:117] "RemoveContainer" containerID="5031a9000ba13266ff563f20ed5d1051c6903c52bb120dfe5b49a8077462e6f4" Jan 23 12:59:30 crc kubenswrapper[4865]: E0123 12:59:30.119258 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-fdkt9_openstack-operators(8e227974-40b8-4d16-8d5f-961b705a9740)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.149116 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.180367 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-4dd2r" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.185641 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.206965 4865 scope.go:117] "RemoveContainer" containerID="cae7350f56e93ca710d0c21c2da50413d1a8d37e184decf6367e6eecde1618f1" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.262788 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.307096 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.322320 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" event={"ID":"4116044f-0cc3-41fb-9f26-536213e1dfa3","Type":"ContainerStarted","Data":"c75b7b4a0ab86df22883e2a2e433ff69599920974bf67d9880538a7572f33d8f"} Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.387360 4865 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.387952 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.458015 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.490086 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.490146 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.676702 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.774884 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:30 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:30 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:30 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.774933 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.775249 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.777683 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.788475 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.801998 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.930386 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 23 12:59:30 crc kubenswrapper[4865]: I0123 12:59:30.964581 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.018553 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.059044 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 23 
12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.118558 4865 scope.go:117] "RemoveContainer" containerID="0c6031c52e32ce61747d89289bc49fa5f3b122eca73c4bd9f57be015ec527eb9" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.183303 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.280435 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.298573 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.316484 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.325214 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.343532 4865 generic.go:334] "Generic (PLEG): container finished" podID="4116044f-0cc3-41fb-9f26-536213e1dfa3" containerID="c75b7b4a0ab86df22883e2a2e433ff69599920974bf67d9880538a7572f33d8f" exitCode=1 Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.343576 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" event={"ID":"4116044f-0cc3-41fb-9f26-536213e1dfa3","Type":"ContainerDied","Data":"c75b7b4a0ab86df22883e2a2e433ff69599920974bf67d9880538a7572f33d8f"} Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.343645 4865 scope.go:117] "RemoveContainer" containerID="0a4a8aa869e7f3eeccf0bc0dfd13644dd147d087e939875c4f39814cdc0c5169" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.344351 4865 scope.go:117] "RemoveContainer" containerID="c75b7b4a0ab86df22883e2a2e433ff69599920974bf67d9880538a7572f33d8f" Jan 23 12:59:31 crc kubenswrapper[4865]: E0123 12:59:31.344952 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=frr-k8s-webhook-server pod=frr-k8s-webhook-server-7df86c4f6c-dkvk4_metallb-system(4116044f-0cc3-41fb-9f26-536213e1dfa3)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.373101 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.383616 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.443209 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.551826 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.656053 4865 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.683130 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.706181 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.734202 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.753256 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hl94t" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.775107 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:31 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:31 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:31 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.775184 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.798456 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-ppdm5" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.869358 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-kmspr" Jan 23 12:59:31 crc kubenswrapper[4865]: I0123 12:59:31.914829 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.009907 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.010899 4865 scope.go:117] "RemoveContainer" containerID="4b3f708fd26a8e6b4ad737d720f4693da3a7df553a47fa2c49c9d37673a2b97b" Jan 23 12:59:32 crc kubenswrapper[4865]: E0123 12:59:32.011161 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-54ccf4f85d-l6w6d_openstack-operators(2c3366d9-565f-4601-acbb-b473dcfe126c)\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.081821 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.118941 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:59:32 crc kubenswrapper[4865]: E0123 12:59:32.119248 4865 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.119928 4865 scope.go:117] "RemoveContainer" containerID="faa9c6ad2fabad2d8ae1f9bd37836878ff75fb3663387a72bdc1ee93c863cd03" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.144954 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.146007 4865 scope.go:117] "RemoveContainer" containerID="5a717a28c3ab6313b6531cf15eb4cb0ab0c1f50459f10a50e3c41ba8d6a77900" Jan 23 12:59:32 crc kubenswrapper[4865]: E0123 12:59:32.146409 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=metallb-operator-controller-manager-7df9698d5d-lk94b_metallb-system(d1a0503d-3fc4-45b6-87c0-7af4a7246a4b)\"" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.157526 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-2mrfh" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.165708 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.169802 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.362720 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/3.log" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.365590 4865 generic.go:334] "Generic (PLEG): container finished" podID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" containerID="2d16a5acbf212571adc93c0dc779c779aed1484718b55ebd8d904c177e523c24" exitCode=1 Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.365638 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" event={"ID":"da1cf187-8918-46b4-ab33-e8912c9d0dd6","Type":"ContainerDied","Data":"2d16a5acbf212571adc93c0dc779c779aed1484718b55ebd8d904c177e523c24"} Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.365697 4865 scope.go:117] "RemoveContainer" containerID="0c6031c52e32ce61747d89289bc49fa5f3b122eca73c4bd9f57be015ec527eb9" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.366380 4865 scope.go:117] "RemoveContainer" containerID="2d16a5acbf212571adc93c0dc779c779aed1484718b55ebd8d904c177e523c24" Jan 23 12:59:32 crc kubenswrapper[4865]: E0123 12:59:32.366637 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=glance-operator-controller-manager-78fdd796fd-8qtnc_openstack-operators(da1cf187-8918-46b4-ab33-e8912c9d0dd6)\"" 
pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.372565 4865 scope.go:117] "RemoveContainer" containerID="c75b7b4a0ab86df22883e2a2e433ff69599920974bf67d9880538a7572f33d8f" Jan 23 12:59:32 crc kubenswrapper[4865]: E0123 12:59:32.372810 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=frr-k8s-webhook-server pod=frr-k8s-webhook-server-7df86c4f6c-dkvk4_metallb-system(4116044f-0cc3-41fb-9f26-536213e1dfa3)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.400198 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.413079 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.491881 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.508957 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.595070 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.618441 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.645626 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-tlds9" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.655003 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.655047 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.655730 4865 scope.go:117] "RemoveContainer" containerID="a662702efe9e87379f485c54ea659eea964b9ab589f691c6f62222a3e3f9537a" Jan 23 12:59:32 crc kubenswrapper[4865]: E0123 12:59:32.655944 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=webhook-server pod=metallb-operator-webhook-server-78f5776895-s7hqg_metallb-system(9177b0d0-3ce7-40fe-8567-85cb8dd5227a)\"" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.697764 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.742472 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.10:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.742530 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.742814 4865 patch_prober.go:28] interesting pod/console-operator-58897d9998-8lsbn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.742875 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" podUID="cfe7c397-99ae-494d-a418-b0f08568f156" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.773826 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:32 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:32 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:32 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.773882 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:32 crc kubenswrapper[4865]: I0123 12:59:32.982399 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.001613 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.010969 4865 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.036246 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.042459 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.096820 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.119011 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 
23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.119108 4865 scope.go:117] "RemoveContainer" containerID="59d3de8a13e8835c1ed76b4149d3cd990bce2009bed5df37c5208d95bb6ad7ef" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.184044 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.189363 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.272888 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.362717 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.362786 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.363748 4865 scope.go:117] "RemoveContainer" containerID="c9af27a61d3dc1d7f7b2d575fbafd518ca1e36a8d58fe436f0ce27465d9bdc48" Jan 23 12:59:33 crc kubenswrapper[4865]: E0123 12:59:33.364196 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=controller pod=controller-6968d8fdc4-8bjkz_metallb-system(3685d2b2-151b-479a-92c1-ae400eacd1b9)\"" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.383806 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/4.log" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.384500 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/3.log" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.384569 4865 generic.go:334] "Generic (PLEG): container finished" podID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" containerID="8b34ebfddc106574cbd9dc578c5c1e78a7af49d50692df0213f0a801b0d40728" exitCode=1 Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.384640 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" event={"ID":"189c80ac-7038-4b48-bebb-5c5d7e2cd362","Type":"ContainerDied","Data":"8b34ebfddc106574cbd9dc578c5c1e78a7af49d50692df0213f0a801b0d40728"} Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.384698 4865 scope.go:117] "RemoveContainer" containerID="faa9c6ad2fabad2d8ae1f9bd37836878ff75fb3663387a72bdc1ee93c863cd03" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.385485 4865 scope.go:117] "RemoveContainer" containerID="8b34ebfddc106574cbd9dc578c5c1e78a7af49d50692df0213f0a801b0d40728" Jan 23 12:59:33 crc kubenswrapper[4865]: E0123 12:59:33.385953 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator 
pod=marketplace-operator-79b997595-7xpgm_openshift-marketplace(189c80ac-7038-4b48-bebb-5c5d7e2cd362)\"" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.386475 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.422314 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jqmpp" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.469955 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.480284 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.648321 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.776383 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:33 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:33 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:33 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.776446 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.776721 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.776786 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.792957 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.793029 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.794099 4865 scope.go:117] "RemoveContainer" containerID="c75b7b4a0ab86df22883e2a2e433ff69599920974bf67d9880538a7572f33d8f" Jan 23 12:59:33 crc kubenswrapper[4865]: E0123 12:59:33.794335 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=frr-k8s-webhook-server 
pod=frr-k8s-webhook-server-7df86c4f6c-dkvk4_metallb-system(4116044f-0cc3-41fb-9f26-536213e1dfa3)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.883296 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.884296 4865 scope.go:117] "RemoveContainer" containerID="37863c5fef2db414e26d4bc0b4d8291a7aa8d9062cb97e690a84bea171062f6d" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.884497 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 12:59:33 crc kubenswrapper[4865]: E0123 12:59:33.884581 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=olm-operator pod=olm-operator-6b444d44fb-g5xkl_openshift-operator-lifecycle-manager(582f83b4-97dc-4f56-9879-c73fab80488a)\"" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.925307 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 12:59:33 crc kubenswrapper[4865]: I0123 12:59:33.948956 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.011129 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-nr4zd" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.117349 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.118029 4865 scope.go:117] "RemoveContainer" containerID="ad7b58da97aea8e81a4fc78a113fc65a47b6ef7c45356cb1e725a3ba71c07b61" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.118535 4865 scope.go:117] "RemoveContainer" containerID="d73a1b61243fadfa9f65fa4fdf2278989e76adf6f98cb19d257fa9e32b8d1db3" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.118683 4865 scope.go:117] "RemoveContainer" containerID="cdfd3ac5584103b22c2a79a31ed0095107b482a2f21d9168033a89df7eff77ee" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.180461 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.201412 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.342231 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.345508 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.411445 4865 generic.go:334] "Generic (PLEG): container finished" podID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" 
containerID="e8d46240db8ddaba7757c9918aece7afc99fd17fa4b6e7ff56182c39c3b3c320" exitCode=255 Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.411512 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerDied","Data":"e8d46240db8ddaba7757c9918aece7afc99fd17fa4b6e7ff56182c39c3b3c320"} Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.411544 4865 scope.go:117] "RemoveContainer" containerID="59d3de8a13e8835c1ed76b4149d3cd990bce2009bed5df37c5208d95bb6ad7ef" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.412465 4865 scope.go:117] "RemoveContainer" containerID="e8d46240db8ddaba7757c9918aece7afc99fd17fa4b6e7ff56182c39c3b3c320" Jan 23 12:59:34 crc kubenswrapper[4865]: E0123 12:59:34.413028 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=csi-provisioner pod=csi-hostpathplugin-g7l9x_hostpath-provisioner(f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb)\"" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.418408 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/4.log" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.421108 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.423502 4865 scope.go:117] "RemoveContainer" containerID="37863c5fef2db414e26d4bc0b4d8291a7aa8d9062cb97e690a84bea171062f6d" Jan 23 12:59:34 crc kubenswrapper[4865]: E0123 12:59:34.423750 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=olm-operator pod=olm-operator-6b444d44fb-g5xkl_openshift-operator-lifecycle-manager(582f83b4-97dc-4f56-9879-c73fab80488a)\"" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.491646 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.507517 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.507960 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.508167 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.516931 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.686167 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.745164 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.775186 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:34 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:34 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:34 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.775245 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.803872 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.803880 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.837022 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-wzpjd" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.848409 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.857886 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.868098 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.884095 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.884139 4865 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-xwjxp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.884192 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.884180 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" podUID="2699af1d-57a0-4ce2-9550-b423f9eafc0f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:34 crc kubenswrapper[4865]: I0123 12:59:34.892472 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.002773 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.015010 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.044011 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.118450 4865 scope.go:117] "RemoveContainer" containerID="6cf78a2b1d1d694c28eefbe3bd33e8e63f79645642989bd83eeeaa0c3233b15d" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.118883 4865 scope.go:117] "RemoveContainer" containerID="e8bba2b0d16880a680c2f1646d719f787c9086585a89aecce074435b389bda88" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.388384 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.433574 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" event={"ID":"6d4fbfc8-900e-4c44-a458-039d37a6dd40","Type":"ContainerStarted","Data":"7851990b7985c234bf995169fc54ba523c81a79f855cd468ec727349c938b02e"} Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.434842 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.436700 4865 generic.go:334] "Generic (PLEG): container finished" podID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" containerID="1e8084a377095353f6c0a51b0c1ba9381b21e0ff984cdc9e3e7ce846d91eaaac" exitCode=1 Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.436754 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" event={"ID":"bdf8f14b-af0d-43cc-b624-7dab2879dc4b","Type":"ContainerDied","Data":"1e8084a377095353f6c0a51b0c1ba9381b21e0ff984cdc9e3e7ce846d91eaaac"} Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.436782 4865 scope.go:117] "RemoveContainer" containerID="d73a1b61243fadfa9f65fa4fdf2278989e76adf6f98cb19d257fa9e32b8d1db3" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.437168 4865 scope.go:117] "RemoveContainer" containerID="1e8084a377095353f6c0a51b0c1ba9381b21e0ff984cdc9e3e7ce846d91eaaac" Jan 23 12:59:35 crc kubenswrapper[4865]: E0123 12:59:35.437411 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager 
pod=cinder-operator-controller-manager-69cf5d4557-9jp5b_openstack-operators(bdf8f14b-af0d-43cc-b624-7dab2879dc4b)\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.440543 4865 generic.go:334] "Generic (PLEG): container finished" podID="15434cef-8cb6-4386-b761-143f1819cac8" containerID="893186b97fd348cbc78b5583c9dee4848834f0940caa5a09f42c8b91d8258985" exitCode=1 Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.440671 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" event={"ID":"15434cef-8cb6-4386-b761-143f1819cac8","Type":"ContainerDied","Data":"893186b97fd348cbc78b5583c9dee4848834f0940caa5a09f42c8b91d8258985"} Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.441068 4865 scope.go:117] "RemoveContainer" containerID="893186b97fd348cbc78b5583c9dee4848834f0940caa5a09f42c8b91d8258985" Jan 23 12:59:35 crc kubenswrapper[4865]: E0123 12:59:35.441325 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-cf98fcc89-7kqtt_cert-manager(15434cef-8cb6-4386-b761-143f1819cac8)\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" podUID="15434cef-8cb6-4386-b761-143f1819cac8" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.444951 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" event={"ID":"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73","Type":"ContainerStarted","Data":"b81b39a307d9441dfdf5f3dabb5507dd80e3fc704a9d4ee320d541a2a4b82254"} Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.445500 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.446907 4865 generic.go:334] "Generic (PLEG): container finished" podID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" containerID="46ffc542f334f599ae5569589549c116c3469a2659c4d68570d904f03567bbfb" exitCode=1 Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.446939 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" event={"ID":"6aca96af-acfa-4c68-a2f4-ed19f08ddc4e","Type":"ContainerDied","Data":"46ffc542f334f599ae5569589549c116c3469a2659c4d68570d904f03567bbfb"} Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.447740 4865 scope.go:117] "RemoveContainer" containerID="46ffc542f334f599ae5569589549c116c3469a2659c4d68570d904f03567bbfb" Jan 23 12:59:35 crc kubenswrapper[4865]: E0123 12:59:35.448006 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=horizon-operator-controller-manager-77d5c5b54f-qftlt_openstack-operators(6aca96af-acfa-4c68-a2f4-ed19f08ddc4e)\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.788294 4865 patch_prober.go:28] interesting pod/route-controller-manager-6497cbfbf6-fkmfr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.788756 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.805477 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.805707 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-ffmg5" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.805844 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.806012 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.806099 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.806251 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:35 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:35 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:35 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.806295 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.848386 4865 scope.go:117] "RemoveContainer" containerID="ad7b58da97aea8e81a4fc78a113fc65a47b6ef7c45356cb1e725a3ba71c07b61" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.943712 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-k2tck" Jan 23 12:59:35 crc kubenswrapper[4865]: I0123 12:59:35.979630 4865 scope.go:117] "RemoveContainer" containerID="cdfd3ac5584103b22c2a79a31ed0095107b482a2f21d9168033a89df7eff77ee" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.011299 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.033426 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.035278 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.042843 4865 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.221783 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.223519 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bxlmh" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.225365 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.307110 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.337986 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.339039 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.340101 4865 scope.go:117] "RemoveContainer" containerID="2d16a5acbf212571adc93c0dc779c779aed1484718b55ebd8d904c177e523c24" Jan 23 12:59:36 crc kubenswrapper[4865]: E0123 12:59:36.340394 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=glance-operator-controller-manager-78fdd796fd-8qtnc_openstack-operators(da1cf187-8918-46b4-ab33-e8912c9d0dd6)\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.357655 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.374490 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.383334 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.383511 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.456070 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6497cbfbf6-fkmfr_60877fc9-78f8-4298-8104-8cd90e28d3bd/route-controller-manager/2.log" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.456584 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6497cbfbf6-fkmfr_60877fc9-78f8-4298-8104-8cd90e28d3bd/route-controller-manager/1.log" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.456682 4865 generic.go:334] "Generic (PLEG): container finished" 
podID="60877fc9-78f8-4298-8104-8cd90e28d3bd" containerID="9a4139dfc969c2097fac96a7889d206938f804f3ca29b28428ed6f6ee614103d" exitCode=255 Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.456741 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" event={"ID":"60877fc9-78f8-4298-8104-8cd90e28d3bd","Type":"ContainerDied","Data":"9a4139dfc969c2097fac96a7889d206938f804f3ca29b28428ed6f6ee614103d"} Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.456774 4865 scope.go:117] "RemoveContainer" containerID="531e111fafa26b694fc58ed92230d273fd823d8287a4b5fa1ee16877358fe461" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.457428 4865 scope.go:117] "RemoveContainer" containerID="9a4139dfc969c2097fac96a7889d206938f804f3ca29b28428ed6f6ee614103d" Jan 23 12:59:36 crc kubenswrapper[4865]: E0123 12:59:36.457672 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=route-controller-manager pod=route-controller-manager-6497cbfbf6-fkmfr_openshift-route-controller-manager(60877fc9-78f8-4298-8104-8cd90e28d3bd)\"" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.461302 4865 generic.go:334] "Generic (PLEG): container finished" podID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" containerID="b81b39a307d9441dfdf5f3dabb5507dd80e3fc704a9d4ee320d541a2a4b82254" exitCode=1 Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.461363 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" event={"ID":"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73","Type":"ContainerDied","Data":"b81b39a307d9441dfdf5f3dabb5507dd80e3fc704a9d4ee320d541a2a4b82254"} Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.461891 4865 scope.go:117] "RemoveContainer" containerID="b81b39a307d9441dfdf5f3dabb5507dd80e3fc704a9d4ee320d541a2a4b82254" Jan 23 12:59:36 crc kubenswrapper[4865]: E0123 12:59:36.462201 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=placement-operator-controller-manager-5d646b7d76-7fdbl_openstack-operators(fb9fb53a-b18e-4291-ab1b-83ac2fd78a73)\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.463536 4865 scope.go:117] "RemoveContainer" containerID="46ffc542f334f599ae5569589549c116c3469a2659c4d68570d904f03567bbfb" Jan 23 12:59:36 crc kubenswrapper[4865]: E0123 12:59:36.463875 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=horizon-operator-controller-manager-77d5c5b54f-qftlt_openstack-operators(6aca96af-acfa-4c68-a2f4-ed19f08ddc4e)\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.469450 4865 generic.go:334] "Generic (PLEG): container finished" podID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" 
containerID="7851990b7985c234bf995169fc54ba523c81a79f855cd468ec727349c938b02e" exitCode=1 Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.469508 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" event={"ID":"6d4fbfc8-900e-4c44-a458-039d37a6dd40","Type":"ContainerDied","Data":"7851990b7985c234bf995169fc54ba523c81a79f855cd468ec727349c938b02e"} Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.470217 4865 scope.go:117] "RemoveContainer" containerID="7851990b7985c234bf995169fc54ba523c81a79f855cd468ec727349c938b02e" Jan 23 12:59:36 crc kubenswrapper[4865]: E0123 12:59:36.470448 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=octavia-operator-controller-manager-7bd9774b6-bqtq9_openstack-operators(6d4fbfc8-900e-4c44-a458-039d37a6dd40)\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.473083 4865 scope.go:117] "RemoveContainer" containerID="1e8084a377095353f6c0a51b0c1ba9381b21e0ff984cdc9e3e7ce846d91eaaac" Jan 23 12:59:36 crc kubenswrapper[4865]: E0123 12:59:36.474425 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=cinder-operator-controller-manager-69cf5d4557-9jp5b_openstack-operators(bdf8f14b-af0d-43cc-b624-7dab2879dc4b)\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.478349 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-lkz55" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.478678 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-fdj7x" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.479805 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.533637 4865 scope.go:117] "RemoveContainer" containerID="e8bba2b0d16880a680c2f1646d719f787c9086585a89aecce074435b389bda88" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.564176 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.604678 4865 scope.go:117] "RemoveContainer" containerID="6cf78a2b1d1d694c28eefbe3bd33e8e63f79645642989bd83eeeaa0c3233b15d" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.656237 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.672694 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.699660 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.762492 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.775267 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:36 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:36 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:36 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.775332 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.815000 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.821421 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.822077 4865 scope.go:117] "RemoveContainer" containerID="db5b8fe7b7736bdd3f92d1d332b37184bf21bfd7a6ea6dcadb3721bb397f71d3" Jan 23 12:59:36 crc kubenswrapper[4865]: E0123 12:59:36.822293 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=neutron-operator-controller-manager-5d8f59fb49-hnv8g_openstack-operators(429b62c2-b748-40b1-b00f-a1b0488fc5d0)\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.850116 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.866087 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.882977 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.957468 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 23 12:59:36 crc kubenswrapper[4865]: I0123 12:59:36.998662 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.000840 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.098564 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.118347 4865 scope.go:117] "RemoveContainer" containerID="98657247dc1c409a5ad6e3206fa2c1f831709130bf5f95d5042d9473c85fbecf" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.118906 4865 scope.go:117] "RemoveContainer" 
containerID="fa0a27d3b1895affcd782683e375b5e4c30ad4f54a335fd33201c9ac60a1485b" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.119078 4865 scope.go:117] "RemoveContainer" containerID="277f7fcd7eac07257ddfbc243d0a0fd950fbd5eaf54d5812499188c9ecf72e79" Jan 23 12:59:37 crc kubenswrapper[4865]: E0123 12:59:37.119315 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ovn-operator-controller-manager-55db956ddc-cbz92_openstack-operators(93194445-a021-4960-ab82-085f13cc959d)\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.177996 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.252103 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.288929 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.299547 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-8czh9" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.323747 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.330684 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.410476 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.431315 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.448849 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-w2d7j" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.488262 4865 generic.go:334] "Generic (PLEG): container finished" podID="9faffae5-73bb-4980-8092-b79a6888476d" containerID="c1fccdff35bce6869db28dae53682a3777098670187d98b7b6c30ee3e2b62d82" exitCode=1 Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.488325 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerDied","Data":"c1fccdff35bce6869db28dae53682a3777098670187d98b7b6c30ee3e2b62d82"} Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.488380 4865 scope.go:117] "RemoveContainer" containerID="1f97d13aa3ce86a1d2a02f51ffbd89b438cecc4f57a86a864f771252de8c9b3f" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.489514 4865 scope.go:117] "RemoveContainer" containerID="c1fccdff35bce6869db28dae53682a3777098670187d98b7b6c30ee3e2b62d82" Jan 23 12:59:37 crc kubenswrapper[4865]: E0123 12:59:37.490055 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=controller pod=frr-k8s-gh89m_metallb-system(9faffae5-73bb-4980-8092-b79a6888476d)\"" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.493714 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" event={"ID":"a9bb243e-e7c3-4f68-be35-d86fa049c570","Type":"ContainerStarted","Data":"2cde1245521e015d0aae40d1c823114ec04701b03e302c916e892d6708eb497c"} Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.494355 4865 scope.go:117] "RemoveContainer" containerID="2cde1245521e015d0aae40d1c823114ec04701b03e302c916e892d6708eb497c" Jan 23 12:59:37 crc kubenswrapper[4865]: E0123 12:59:37.494572 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570)\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.497169 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6497cbfbf6-fkmfr_60877fc9-78f8-4298-8104-8cd90e28d3bd/route-controller-manager/2.log" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.499164 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/3.log" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.499487 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" event={"ID":"2c1ba660-8691-49e2-b0cc-056355d82f4c","Type":"ContainerStarted","Data":"3912e2bb7e2743a31cd10155cf739bec788dcd478e4d809dd42b097119e32895"} Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.500357 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.505915 4865 generic.go:334] "Generic (PLEG): container finished" podID="3dee20a9-c14d-4a42-afb1-87d126996c56" containerID="7090bd800d91f637ccaa1100f5ceee8639300992d193dba3e886280899e7ce41" exitCode=1 Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.505993 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-szb9h" event={"ID":"3dee20a9-c14d-4a42-afb1-87d126996c56","Type":"ContainerDied","Data":"7090bd800d91f637ccaa1100f5ceee8639300992d193dba3e886280899e7ce41"} Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.506858 4865 scope.go:117] "RemoveContainer" containerID="7090bd800d91f637ccaa1100f5ceee8639300992d193dba3e886280899e7ce41" Jan 23 12:59:37 crc kubenswrapper[4865]: E0123 12:59:37.507209 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"speaker\" with CrashLoopBackOff: \"back-off 10s restarting failed container=speaker pod=speaker-szb9h_metallb-system(3dee20a9-c14d-4a42-afb1-87d126996c56)\"" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.511374 4865 scope.go:117] "RemoveContainer" 
containerID="b81b39a307d9441dfdf5f3dabb5507dd80e3fc704a9d4ee320d541a2a4b82254" Jan 23 12:59:37 crc kubenswrapper[4865]: E0123 12:59:37.511748 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=placement-operator-controller-manager-5d646b7d76-7fdbl_openstack-operators(fb9fb53a-b18e-4291-ab1b-83ac2fd78a73)\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.515805 4865 scope.go:117] "RemoveContainer" containerID="7851990b7985c234bf995169fc54ba523c81a79f855cd468ec727349c938b02e" Jan 23 12:59:37 crc kubenswrapper[4865]: E0123 12:59:37.516177 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=octavia-operator-controller-manager-7bd9774b6-bqtq9_openstack-operators(6d4fbfc8-900e-4c44-a458-039d37a6dd40)\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.524325 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.526334 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.542682 4865 scope.go:117] "RemoveContainer" containerID="07df3a1af2b9bce0fd1cfcba17c0038c1597e26cafddd2b98e53aadb8fdae6e7" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.602321 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.638652 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.659847 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.666195 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.773940 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.776172 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:37 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:37 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:37 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.776217 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.817079 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 12:59:37 crc kubenswrapper[4865]: I0123 12:59:37.960980 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.017863 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.028412 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.080785 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.117966 4865 scope.go:117] "RemoveContainer" containerID="1c7f453a8aa0ae056bac8b7a278a7f7156ab8b37aff06ebff06217df84e970bb" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.213685 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.278442 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.303324 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.473655 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.528232 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-znx59_141f6171-3d39-421b-98f4-6accc5d30ae2/openshift-config-operator/3.log" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.528653 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" event={"ID":"141f6171-3d39-421b-98f4-6accc5d30ae2","Type":"ContainerStarted","Data":"ec4a58e58e55d8fbefe25f7842d0cb76ad425b55bb964954fda90c3082d72fe2"} Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.528971 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.537498 4865 generic.go:334] "Generic (PLEG): container finished" podID="a9bb243e-e7c3-4f68-be35-d86fa049c570" containerID="2cde1245521e015d0aae40d1c823114ec04701b03e302c916e892d6708eb497c" exitCode=1 Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.537569 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" event={"ID":"a9bb243e-e7c3-4f68-be35-d86fa049c570","Type":"ContainerDied","Data":"2cde1245521e015d0aae40d1c823114ec04701b03e302c916e892d6708eb497c"} Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.537619 4865 scope.go:117] "RemoveContainer" containerID="fa0a27d3b1895affcd782683e375b5e4c30ad4f54a335fd33201c9ac60a1485b" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.538209 4865 
scope.go:117] "RemoveContainer" containerID="2cde1245521e015d0aae40d1c823114ec04701b03e302c916e892d6708eb497c" Jan 23 12:59:38 crc kubenswrapper[4865]: E0123 12:59:38.538408 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570)\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.541938 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/4.log" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.543079 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/3.log" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.543507 4865 generic.go:334] "Generic (PLEG): container finished" podID="2c1ba660-8691-49e2-b0cc-056355d82f4c" containerID="3912e2bb7e2743a31cd10155cf739bec788dcd478e4d809dd42b097119e32895" exitCode=1 Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.543557 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" event={"ID":"2c1ba660-8691-49e2-b0cc-056355d82f4c","Type":"ContainerDied","Data":"3912e2bb7e2743a31cd10155cf739bec788dcd478e4d809dd42b097119e32895"} Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.543923 4865 scope.go:117] "RemoveContainer" containerID="3912e2bb7e2743a31cd10155cf739bec788dcd478e4d809dd42b097119e32895" Jan 23 12:59:38 crc kubenswrapper[4865]: E0123 12:59:38.544124 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=package-server-manager pod=package-server-manager-789f6589d5-4g249_openshift-operator-lifecycle-manager(2c1ba660-8691-49e2-b0cc-056355d82f4c)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.601057 4865 scope.go:117] "RemoveContainer" containerID="98657247dc1c409a5ad6e3206fa2c1f831709130bf5f95d5042d9473c85fbecf" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.602585 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.644036 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.685911 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.686716 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.686729 4865 scope.go:117] "RemoveContainer" containerID="8b34ebfddc106574cbd9dc578c5c1e78a7af49d50692df0213f0a801b0d40728" Jan 23 
12:59:38 crc kubenswrapper[4865]: E0123 12:59:38.687156 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-7xpgm_openshift-marketplace(189c80ac-7038-4b48-bebb-5c5d7e2cd362)\"" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.774928 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:38 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:38 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:38 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.775202 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.778994 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 23 12:59:38 crc kubenswrapper[4865]: I0123 12:59:38.949140 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.003381 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.003434 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.027029 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.099531 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.118486 4865 scope.go:117] "RemoveContainer" containerID="86ef6bcfe1b3263f30677005188e0a48259b6d6dcdb05efcd30f92f3a527c545" Jan 23 12:59:39 crc kubenswrapper[4865]: E0123 12:59:39.119554 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=swift-operator-controller-manager-547cbdb99f-zm52l_openstack-operators(661fbfd2-7d52-419a-943f-c57854d2306b)\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.136998 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.162709 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.163843 4865 scope.go:117] "RemoveContainer" 
containerID="5ee6b1222e9411f014bd61cf294c67ddc555f8ce71c7f5164add3cbb62c2ca98" Jan 23 12:59:39 crc kubenswrapper[4865]: E0123 12:59:39.164205 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=openstack-operator-controller-manager-76c5c47f8f-p49qh_openstack-operators(b2ea2452-dc3b-4b93-a9d4-e562a63111c9)\"" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.206394 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.217664 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.275660 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.396929 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.460103 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.483712 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.547491 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.559615 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/4.log" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.560713 4865 scope.go:117] "RemoveContainer" containerID="8b34ebfddc106574cbd9dc578c5c1e78a7af49d50692df0213f0a801b0d40728" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.560844 4865 scope.go:117] "RemoveContainer" containerID="3912e2bb7e2743a31cd10155cf739bec788dcd478e4d809dd42b097119e32895" Jan 23 12:59:39 crc kubenswrapper[4865]: E0123 12:59:39.561165 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=package-server-manager pod=package-server-manager-789f6589d5-4g249_openshift-operator-lifecycle-manager(2c1ba660-8691-49e2-b0cc-056355d82f4c)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" Jan 23 12:59:39 crc kubenswrapper[4865]: E0123 12:59:39.561140 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-7xpgm_openshift-marketplace(189c80ac-7038-4b48-bebb-5c5d7e2cd362)\"" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.622403 4865 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.774568 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:39 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:39 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:39 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.774982 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:39 crc kubenswrapper[4865]: I0123 12:59:39.835058 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.109409 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.118576 4865 scope.go:117] "RemoveContainer" containerID="486a01a99c4c9f08efb4c22aa2be31250963090a89b5e968702a63403a9f6476" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.118653 4865 scope.go:117] "RemoveContainer" containerID="302afc1aa2f5954d846f9b947677f513aa4903bdf8789eb58d4cbe0e85645cff" Jan 23 12:59:40 crc kubenswrapper[4865]: E0123 12:59:40.118851 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=keystone-operator-controller-manager-b8b6d4659-9fl7w_openstack-operators(e92ddc14-bdb6-4407-b8a3-047079030166)\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" Jan 23 12:59:40 crc kubenswrapper[4865]: E0123 12:59:40.118867 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=heat-operator-controller-manager-594c8c9d5d-fsch6_openstack-operators(0167f850-ba43-426a-8c56-aa171131e7da)\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.119480 4865 scope.go:117] "RemoveContainer" containerID="bd042d4838673f9732c4e8c413fa92e5a1c5e88525bd5aeef263d3b1e9d83000" Jan 23 12:59:40 crc kubenswrapper[4865]: E0123 12:59:40.119670 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=designate-operator-controller-manager-b45d7bf98-4c94z_openstack-operators(10627175-8e39-4799-bec7-c0b49b938a29)\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.119829 4865 scope.go:117] "RemoveContainer" containerID="b8fc2540ac6be036b476d713f6f32c61384132a04b49c6695b63e036120ddd4b" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.120105 4865 scope.go:117] "RemoveContainer" 
containerID="372633d6c1629ea3c133a035a42459ad9888e2416153cf6dc5a143ce8246eb5e" Jan 23 12:59:40 crc kubenswrapper[4865]: E0123 12:59:40.120194 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-h6dkp_openstack-operators(967c3782-1bce-4145-8244-7650fe19dc22)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.122093 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.123297 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.409779 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.433147 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.449658 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.489714 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.489779 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.592738 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.631116 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.718452 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.774638 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:40 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:40 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:40 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.774697 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 
12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.817037 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.824728 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.824941 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.904311 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.920241 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 12:59:40 crc kubenswrapper[4865]: I0123 12:59:40.928226 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.011201 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.011691 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.118140 4865 scope.go:117] "RemoveContainer" containerID="5031a9000ba13266ff563f20ed5d1051c6903c52bb120dfe5b49a8077462e6f4" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.118247 4865 scope.go:117] "RemoveContainer" containerID="4b92021aa738d8ce5a77d8b00baf08a60d2b581b8a7c210f45e831e89d21b25d" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.118271 4865 scope.go:117] "RemoveContainer" containerID="dac04c172b4cbd023a80707128cb43258f9d6203fd72be3fda5736dc24798b27" Jan 23 12:59:41 crc kubenswrapper[4865]: E0123 12:59:41.118418 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-fdkt9_openstack-operators(8e227974-40b8-4d16-8d5f-961b705a9740)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" podUID="8e227974-40b8-4d16-8d5f-961b705a9740" Jan 23 12:59:41 crc kubenswrapper[4865]: E0123 12:59:41.118427 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=mariadb-operator-controller-manager-c87fff755-mlm5v_openstack-operators(d2f4bfa4-63e2-418a-b52a-75d2992af596)\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 23 12:59:41 crc kubenswrapper[4865]: E0123 12:59:41.118442 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.118504 4865 scope.go:117] "RemoveContainer" 
containerID="820b838e41932b51e492781e46a62aba87e5bd3fe1ad917198cfd13ce68996af" Jan 23 12:59:41 crc kubenswrapper[4865]: E0123 12:59:41.118765 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=barbican-operator-controller-manager-59dd8b7cbf-nppmq_openstack-operators(5fb13a32-67c3-46b1-a0b8-573e941e6c7e)\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.119182 4865 scope.go:117] "RemoveContainer" containerID="4bd26460c7107d596f6e82281722c13d41309f2acaeef6043fde99411265ea62" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.119292 4865 scope.go:117] "RemoveContainer" containerID="38c3a6ccccdf9a9b276e9ee0f9aa09c7c93bfb6ad347dc4b24a986fb1d05d602" Jan 23 12:59:41 crc kubenswrapper[4865]: E0123 12:59:41.119528 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=watcher-operator-controller-manager-5ffb9c6597-7mv2d_openstack-operators(8ef0fdaa-8086-467d-8106-5c6dec532dba)\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.143260 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.244529 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.266495 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.320831 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.322105 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8ft6r" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.322186 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.366012 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.383102 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.480564 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.483912 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.578557 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-42cdm_843c383b-053f-42f5-88ce-7a216f5354a3/catalog-operator/3.log" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.578702 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" event={"ID":"843c383b-053f-42f5-88ce-7a216f5354a3","Type":"ContainerStarted","Data":"63ce31354c5e175f286a3fce331f4acd38297ea2ab01ec1f2e00e8a7899faa96"} Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.579434 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.587498 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-42cdm" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.718216 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.745675 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8lsbn" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.757948 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.777555 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:41 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:41 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:41 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:41 crc kubenswrapper[4865]: I0123 12:59:41.777625 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.009991 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.010701 4865 scope.go:117] "RemoveContainer" containerID="4b3f708fd26a8e6b4ad737d720f4693da3a7df553a47fa2c49c9d37673a2b97b" Jan 23 12:59:42 crc kubenswrapper[4865]: E0123 12:59:42.010909 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-54ccf4f85d-l6w6d_openstack-operators(2c3366d9-565f-4601-acbb-b473dcfe126c)\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.088817 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.108206 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" 
Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.118674 4865 scope.go:117] "RemoveContainer" containerID="ecc11a6b708e2ecc2bab9400c87b0ae20de964fb884bde7f72ea4d00a7271726" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.118927 4865 scope.go:117] "RemoveContainer" containerID="ec509e0de18f0a1eb53b6c974db80fba7ecc65f2c1424ad1321e243121b7162d" Jan 23 12:59:42 crc kubenswrapper[4865]: E0123 12:59:42.119161 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=telemetry-operator-controller-manager-85cd9769bb-kkkcn_openstack-operators(dbfec6f5-80b4-480f-a958-c3107b2776c0)\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" podUID="dbfec6f5-80b4-480f-a958-c3107b2776c0" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.140967 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.152962 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qjk5k" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.162911 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.274281 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.287518 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.332421 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ccls2" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.354003 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.359138 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.401936 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.443360 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-q6psp" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.515995 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.557262 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-znx59" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.595360 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-7489ccbc46-6gcbp_a51b0d26-bdc8-433f-90e5-d90b9bd94373/oauth-openshift/3.log" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.597665 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.602131 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" event={"ID":"a51b0d26-bdc8-433f-90e5-d90b9bd94373","Type":"ContainerStarted","Data":"24ba330b95cae83a5f59bfe618bb93cb64433dfed576b96d012873388b064d91"} Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.603003 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.675779 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.718473 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.727889 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.743522 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.773827 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:42 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:42 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:42 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.774185 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.821481 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.947992 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-nhqgr" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.948574 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7489ccbc46-6gcbp" Jan 23 12:59:42 crc kubenswrapper[4865]: I0123 12:59:42.977540 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.023765 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.035476 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.073843 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.108449 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.190040 4865 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.213439 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.301567 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.301675 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.302497 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"c74f0d480b376c78be8098a723dd25f8263d240a2395a6275f1e9ce7a869a41f"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed startup probe, will be restarted" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.302565 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" containerID="cri-o://c74f0d480b376c78be8098a723dd25f8263d240a2395a6275f1e9ce7a869a41f" gracePeriod=30 Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.344559 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-kmcjf" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.351901 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.406359 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.414379 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.416161 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.425122 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.534791 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-dzq9p" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.574310 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.593339 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.634232 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-g7gf5" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.654250 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 12:59:43 crc 
kubenswrapper[4865]: I0123 12:59:43.762281 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.773035 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.776331 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.777234 4865 scope.go:117] "RemoveContainer" containerID="c1fccdff35bce6869db28dae53682a3777098670187d98b7b6c30ee3e2b62d82" Jan 23 12:59:43 crc kubenswrapper[4865]: E0123 12:59:43.777522 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller pod=frr-k8s-gh89m_metallb-system(9faffae5-73bb-4980-8092-b79a6888476d)\"" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.777693 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:43 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:43 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:43 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.777715 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:43 crc kubenswrapper[4865]: I0123 12:59:43.829958 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gqlj9" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.012235 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-xwjxp" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.022985 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.036202 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.118852 4865 scope.go:117] "RemoveContainer" containerID="5a717a28c3ab6313b6531cf15eb4cb0ab0c1f50459f10a50e3c41ba8d6a77900" Jan 23 12:59:44 crc kubenswrapper[4865]: E0123 12:59:44.119138 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=metallb-operator-controller-manager-7df9698d5d-lk94b_metallb-system(d1a0503d-3fc4-45b6-87c0-7af4a7246a4b)\"" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.120173 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:59:44 crc kubenswrapper[4865]: E0123 12:59:44.120404 
4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.141028 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.166189 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.166987 4865 scope.go:117] "RemoveContainer" containerID="3912e2bb7e2743a31cd10155cf739bec788dcd478e4d809dd42b097119e32895" Jan 23 12:59:44 crc kubenswrapper[4865]: E0123 12:59:44.167191 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=package-server-manager pod=package-server-manager-789f6589d5-4g249_openshift-operator-lifecycle-manager(2c1ba660-8691-49e2-b0cc-056355d82f4c)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.229791 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.331301 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.361327 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-p5xpv" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.440964 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-5l6wg" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.457506 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.511722 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.512462 4865 scope.go:117] "RemoveContainer" containerID="9a4139dfc969c2097fac96a7889d206938f804f3ca29b28428ed6f6ee614103d" Jan 23 12:59:44 crc kubenswrapper[4865]: E0123 12:59:44.512741 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"route-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=route-controller-manager pod=route-controller-manager-6497cbfbf6-fkmfr_openshift-route-controller-manager(60877fc9-78f8-4298-8104-8cd90e28d3bd)\"" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" podUID="60877fc9-78f8-4298-8104-8cd90e28d3bd" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.613295 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"4cb0a89a-49f9-4a31-9cec-669e88882018","Type":"ContainerStarted","Data":"43f210a26c030d1e3cfa9a928ba2f5079ae8efcefa591af06cb00d543d804e24"} Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.613641 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.686489 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tcx8b"] Jan 23 12:59:44 crc kubenswrapper[4865]: E0123 12:59:44.687713 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf60cc3-36cd-449c-a995-85e3539d9014" containerName="extract-utilities" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.687746 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf60cc3-36cd-449c-a995-85e3539d9014" containerName="extract-utilities" Jan 23 12:59:44 crc kubenswrapper[4865]: E0123 12:59:44.687804 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerName="extract-content" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.687811 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerName="extract-content" Jan 23 12:59:44 crc kubenswrapper[4865]: E0123 12:59:44.687840 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerName="extract-utilities" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.687846 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerName="extract-utilities" Jan 23 12:59:44 crc kubenswrapper[4865]: E0123 12:59:44.687863 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerName="registry-server" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.687871 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerName="registry-server" Jan 23 12:59:44 crc kubenswrapper[4865]: E0123 12:59:44.687883 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf60cc3-36cd-449c-a995-85e3539d9014" containerName="extract-content" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.687889 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf60cc3-36cd-449c-a995-85e3539d9014" containerName="extract-content" Jan 23 12:59:44 crc kubenswrapper[4865]: E0123 12:59:44.687904 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf60cc3-36cd-449c-a995-85e3539d9014" containerName="registry-server" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.687910 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf60cc3-36cd-449c-a995-85e3539d9014" containerName="registry-server" Jan 23 12:59:44 crc kubenswrapper[4865]: E0123 12:59:44.687919 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" containerName="tempest-tests-tempest-tests-runner" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.687925 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" containerName="tempest-tests-tempest-tests-runner" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.688101 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="6083e716-8bbf-40bf-abdd-87e865a2f7ae" 
containerName="tempest-tests-tempest-tests-runner" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.688124 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="f931330b-23a2-4304-b53f-0fd2a2fd53cb" containerName="registry-server" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.688131 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf60cc3-36cd-449c-a995-85e3539d9014" containerName="registry-server" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.690499 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.709745 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tcx8b"] Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.774641 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:44 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:44 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:44 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.774697 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.786281 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4xfx\" (UniqueName: \"kubernetes.io/projected/54370b2c-cc27-4800-ad14-d61df7b4c73d-kube-api-access-g4xfx\") pod \"certified-operators-tcx8b\" (UID: \"54370b2c-cc27-4800-ad14-d61df7b4c73d\") " pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.786390 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54370b2c-cc27-4800-ad14-d61df7b4c73d-utilities\") pod \"certified-operators-tcx8b\" (UID: \"54370b2c-cc27-4800-ad14-d61df7b4c73d\") " pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.786414 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54370b2c-cc27-4800-ad14-d61df7b4c73d-catalog-content\") pod \"certified-operators-tcx8b\" (UID: \"54370b2c-cc27-4800-ad14-d61df7b4c73d\") " pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.802309 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-szb9h" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.803183 4865 scope.go:117] "RemoveContainer" containerID="7090bd800d91f637ccaa1100f5ceee8639300992d193dba3e886280899e7ce41" Jan 23 12:59:44 crc kubenswrapper[4865]: E0123 12:59:44.803504 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"speaker\" with CrashLoopBackOff: \"back-off 10s restarting failed container=speaker 
pod=speaker-szb9h_metallb-system(3dee20a9-c14d-4a42-afb1-87d126996c56)\"" pod="metallb-system/speaker-szb9h" podUID="3dee20a9-c14d-4a42-afb1-87d126996c56" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.804715 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.888280 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4xfx\" (UniqueName: \"kubernetes.io/projected/54370b2c-cc27-4800-ad14-d61df7b4c73d-kube-api-access-g4xfx\") pod \"certified-operators-tcx8b\" (UID: \"54370b2c-cc27-4800-ad14-d61df7b4c73d\") " pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.888405 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54370b2c-cc27-4800-ad14-d61df7b4c73d-utilities\") pod \"certified-operators-tcx8b\" (UID: \"54370b2c-cc27-4800-ad14-d61df7b4c73d\") " pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.888441 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54370b2c-cc27-4800-ad14-d61df7b4c73d-catalog-content\") pod \"certified-operators-tcx8b\" (UID: \"54370b2c-cc27-4800-ad14-d61df7b4c73d\") " pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.888935 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54370b2c-cc27-4800-ad14-d61df7b4c73d-catalog-content\") pod \"certified-operators-tcx8b\" (UID: \"54370b2c-cc27-4800-ad14-d61df7b4c73d\") " pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.888946 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54370b2c-cc27-4800-ad14-d61df7b4c73d-utilities\") pod \"certified-operators-tcx8b\" (UID: \"54370b2c-cc27-4800-ad14-d61df7b4c73d\") " pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.919380 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4xfx\" (UniqueName: \"kubernetes.io/projected/54370b2c-cc27-4800-ad14-d61df7b4c73d-kube-api-access-g4xfx\") pod \"certified-operators-tcx8b\" (UID: \"54370b2c-cc27-4800-ad14-d61df7b4c73d\") " pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 12:59:44 crc kubenswrapper[4865]: I0123 12:59:44.987432 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-f86ht" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.007459 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.074047 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.108664 4865 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.117965 4865 scope.go:117] "RemoveContainer" containerID="37863c5fef2db414e26d4bc0b4d8291a7aa8d9062cb97e690a84bea171062f6d" Jan 23 12:59:45 crc kubenswrapper[4865]: E0123 12:59:45.118428 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=olm-operator pod=olm-operator-6b444d44fb-g5xkl_openshift-operator-lifecycle-manager(582f83b4-97dc-4f56-9879-c73fab80488a)\"" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.218362 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rxmfb"] Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.221845 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxmfb" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.250186 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.255684 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxmfb"] Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.297504 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/518bbe98-a0a6-4693-babd-dcd94b8897c6-utilities\") pod \"community-operators-rxmfb\" (UID: \"518bbe98-a0a6-4693-babd-dcd94b8897c6\") " pod="openshift-marketplace/community-operators-rxmfb" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.297566 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsrd2\" (UniqueName: \"kubernetes.io/projected/518bbe98-a0a6-4693-babd-dcd94b8897c6-kube-api-access-vsrd2\") pod \"community-operators-rxmfb\" (UID: \"518bbe98-a0a6-4693-babd-dcd94b8897c6\") " pod="openshift-marketplace/community-operators-rxmfb" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.297614 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/518bbe98-a0a6-4693-babd-dcd94b8897c6-catalog-content\") pod \"community-operators-rxmfb\" (UID: \"518bbe98-a0a6-4693-babd-dcd94b8897c6\") " pod="openshift-marketplace/community-operators-rxmfb" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.302565 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.335114 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.412739 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/518bbe98-a0a6-4693-babd-dcd94b8897c6-utilities\") pod \"community-operators-rxmfb\" (UID: \"518bbe98-a0a6-4693-babd-dcd94b8897c6\") " pod="openshift-marketplace/community-operators-rxmfb" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.412807 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsrd2\" (UniqueName: \"kubernetes.io/projected/518bbe98-a0a6-4693-babd-dcd94b8897c6-kube-api-access-vsrd2\") pod \"community-operators-rxmfb\" (UID: \"518bbe98-a0a6-4693-babd-dcd94b8897c6\") " pod="openshift-marketplace/community-operators-rxmfb" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.412841 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/518bbe98-a0a6-4693-babd-dcd94b8897c6-catalog-content\") pod \"community-operators-rxmfb\" (UID: \"518bbe98-a0a6-4693-babd-dcd94b8897c6\") " pod="openshift-marketplace/community-operators-rxmfb" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.413556 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/518bbe98-a0a6-4693-babd-dcd94b8897c6-catalog-content\") pod \"community-operators-rxmfb\" (UID: \"518bbe98-a0a6-4693-babd-dcd94b8897c6\") " pod="openshift-marketplace/community-operators-rxmfb" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.413820 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/518bbe98-a0a6-4693-babd-dcd94b8897c6-utilities\") pod \"community-operators-rxmfb\" (UID: \"518bbe98-a0a6-4693-babd-dcd94b8897c6\") " pod="openshift-marketplace/community-operators-rxmfb" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.440532 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsrd2\" (UniqueName: \"kubernetes.io/projected/518bbe98-a0a6-4693-babd-dcd94b8897c6-kube-api-access-vsrd2\") pod \"community-operators-rxmfb\" (UID: \"518bbe98-a0a6-4693-babd-dcd94b8897c6\") " pod="openshift-marketplace/community-operators-rxmfb" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.539193 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rxmfb" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.565088 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zjbk7" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.577681 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.591263 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tcx8b"] Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.715960 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcx8b" event={"ID":"54370b2c-cc27-4800-ad14-d61df7b4c73d","Type":"ContainerStarted","Data":"e293099ad32e8d9c4235bc2d2945d450d7393de78dc9ccd310764f9dccaf8466"} Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.716299 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.724060 4865 generic.go:334] "Generic (PLEG): container finished" podID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerID="d4e6d818c5e068d51936524f311b0a4ea0a416bc4ab3fabeff119dbfad8a049e" exitCode=137 Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.724141 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerDied","Data":"d4e6d818c5e068d51936524f311b0a4ea0a416bc4ab3fabeff119dbfad8a049e"} Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.724206 4865 scope.go:117] "RemoveContainer" containerID="5742c7fb47488dc050a829f7c69eb88fe730402225438befdb4f7b95a364495a" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.729125 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.787202 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:45 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:45 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:45 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.787266 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:45 crc kubenswrapper[4865]: I0123 12:59:45.919328 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxmfb"] Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.064240 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.074962 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.117977 4865 scope.go:117] "RemoveContainer" 
containerID="e8d46240db8ddaba7757c9918aece7afc99fd17fa4b6e7ff56182c39c3b3c320" Jan 23 12:59:46 crc kubenswrapper[4865]: E0123 12:59:46.118262 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=csi-provisioner pod=csi-hostpathplugin-g7l9x_hostpath-provisioner(f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb)\"" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.195793 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.227686 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.228386 4865 scope.go:117] "RemoveContainer" containerID="1e8084a377095353f6c0a51b0c1ba9381b21e0ff984cdc9e3e7ce846d91eaaac" Jan 23 12:59:46 crc kubenswrapper[4865]: E0123 12:59:46.228624 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=cinder-operator-controller-manager-69cf5d4557-9jp5b_openstack-operators(bdf8f14b-af0d-43cc-b624-7dab2879dc4b)\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.308004 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.340382 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.340853 4865 scope.go:117] "RemoveContainer" containerID="2d16a5acbf212571adc93c0dc779c779aed1484718b55ebd8d904c177e523c24" Jan 23 12:59:46 crc kubenswrapper[4865]: E0123 12:59:46.341074 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=glance-operator-controller-manager-78fdd796fd-8qtnc_openstack-operators(da1cf187-8918-46b4-ab33-e8912c9d0dd6)\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.384887 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.385505 4865 scope.go:117] "RemoveContainer" containerID="46ffc542f334f599ae5569589549c116c3469a2659c4d68570d904f03567bbfb" Jan 23 12:59:46 crc kubenswrapper[4865]: E0123 12:59:46.385717 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=horizon-operator-controller-manager-77d5c5b54f-qftlt_openstack-operators(6aca96af-acfa-4c68-a2f4-ed19f08ddc4e)\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" Jan 23 12:59:46 crc 
kubenswrapper[4865]: I0123 12:59:46.394176 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.401590 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.733034 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmfb" event={"ID":"518bbe98-a0a6-4693-babd-dcd94b8897c6","Type":"ContainerStarted","Data":"ae4f6822454d8763fab15d3c29d55207273230b523fba6a330db877f7de8fb40"} Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.735255 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerStarted","Data":"30d3be517958f944291cf17e27e1c020327e5e7efe24145c62c6dbbaacd35043"} Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.737410 4865 generic.go:334] "Generic (PLEG): container finished" podID="54370b2c-cc27-4800-ad14-d61df7b4c73d" containerID="2e611903b6440fec75ec8cd4cc97040cb502aad45f91320971e5e02966249d6d" exitCode=0 Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.737451 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcx8b" event={"ID":"54370b2c-cc27-4800-ad14-d61df7b4c73d","Type":"ContainerDied","Data":"2e611903b6440fec75ec8cd4cc97040cb502aad45f91320971e5e02966249d6d"} Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.774035 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:46 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:46 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:46 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.774363 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.778422 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.779120 4865 scope.go:117] "RemoveContainer" containerID="2cde1245521e015d0aae40d1c823114ec04701b03e302c916e892d6708eb497c" Jan 23 12:59:46 crc kubenswrapper[4865]: E0123 12:59:46.779344 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570)\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.779736 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.780454 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.822745 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.823420 4865 scope.go:117] "RemoveContainer" containerID="db5b8fe7b7736bdd3f92d1d332b37184bf21bfd7a6ea6dcadb3721bb397f71d3" Jan 23 12:59:46 crc kubenswrapper[4865]: E0123 12:59:46.823855 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=neutron-operator-controller-manager-5d8f59fb49-hnv8g_openstack-operators(429b62c2-b748-40b1-b00f-a1b0488fc5d0)\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.827838 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.866010 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.866669 4865 scope.go:117] "RemoveContainer" containerID="7851990b7985c234bf995169fc54ba523c81a79f855cd468ec727349c938b02e" Jan 23 12:59:46 crc kubenswrapper[4865]: E0123 12:59:46.866875 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=octavia-operator-controller-manager-7bd9774b6-bqtq9_openstack-operators(6d4fbfc8-900e-4c44-a458-039d37a6dd40)\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.940800 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.941511 4865 scope.go:117] "RemoveContainer" containerID="b81b39a307d9441dfdf5f3dabb5507dd80e3fc704a9d4ee320d541a2a4b82254" Jan 23 12:59:46 crc kubenswrapper[4865]: E0123 12:59:46.941737 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=placement-operator-controller-manager-5d646b7d76-7fdbl_openstack-operators(fb9fb53a-b18e-4291-ab1b-83ac2fd78a73)\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:59:46 crc kubenswrapper[4865]: I0123 12:59:46.961010 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.085489 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.118439 4865 scope.go:117] "RemoveContainer" 
containerID="c75b7b4a0ab86df22883e2a2e433ff69599920974bf67d9880538a7572f33d8f" Jan 23 12:59:47 crc kubenswrapper[4865]: E0123 12:59:47.118647 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=frr-k8s-webhook-server pod=frr-k8s-webhook-server-7df86c4f6c-dkvk4_metallb-system(4116044f-0cc3-41fb-9f26-536213e1dfa3)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.118671 4865 scope.go:117] "RemoveContainer" containerID="893186b97fd348cbc78b5583c9dee4848834f0940caa5a09f42c8b91d8258985" Jan 23 12:59:47 crc kubenswrapper[4865]: E0123 12:59:47.118965 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-cf98fcc89-7kqtt_cert-manager(15434cef-8cb6-4386-b761-143f1819cac8)\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" podUID="15434cef-8cb6-4386-b761-143f1819cac8" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.124249 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.238474 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7ncwt" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.318845 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.320369 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.514917 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.561815 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.611893 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9nlns" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.612136 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.748218 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcx8b" event={"ID":"54370b2c-cc27-4800-ad14-d61df7b4c73d","Type":"ContainerStarted","Data":"9c9f53fb5b751f1033af191143537af0ff8bd25bc8516d1312401fa61f5f4d17"} Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.749822 4865 generic.go:334] "Generic (PLEG): container finished" podID="518bbe98-a0a6-4693-babd-dcd94b8897c6" containerID="ab688b1ced078dc892d998b90385fa0fe764395f3fcdc0aa0a01e8a1899d14bd" exitCode=0 Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.749886 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmfb" 
event={"ID":"518bbe98-a0a6-4693-babd-dcd94b8897c6","Type":"ContainerDied","Data":"ab688b1ced078dc892d998b90385fa0fe764395f3fcdc0aa0a01e8a1899d14bd"} Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.750461 4865 scope.go:117] "RemoveContainer" containerID="2cde1245521e015d0aae40d1c823114ec04701b03e302c916e892d6708eb497c" Jan 23 12:59:47 crc kubenswrapper[4865]: E0123 12:59:47.750689 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570)\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.773938 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:47 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:47 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:47 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.774000 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:47 crc kubenswrapper[4865]: I0123 12:59:47.872515 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 12:59:48 crc kubenswrapper[4865]: I0123 12:59:48.118787 4865 scope.go:117] "RemoveContainer" containerID="c9af27a61d3dc1d7f7b2d575fbafd518ca1e36a8d58fe436f0ce27465d9bdc48" Jan 23 12:59:48 crc kubenswrapper[4865]: E0123 12:59:48.119129 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=controller pod=controller-6968d8fdc4-8bjkz_metallb-system(3685d2b2-151b-479a-92c1-ae400eacd1b9)\"" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" Jan 23 12:59:48 crc kubenswrapper[4865]: I0123 12:59:48.119511 4865 scope.go:117] "RemoveContainer" containerID="a662702efe9e87379f485c54ea659eea964b9ab589f691c6f62222a3e3f9537a" Jan 23 12:59:48 crc kubenswrapper[4865]: E0123 12:59:48.119913 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=webhook-server pod=metallb-operator-webhook-server-78f5776895-s7hqg_metallb-system(9177b0d0-3ce7-40fe-8567-85cb8dd5227a)\"" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" Jan 23 12:59:48 crc kubenswrapper[4865]: I0123 12:59:48.137362 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 12:59:48 crc kubenswrapper[4865]: I0123 12:59:48.252463 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 23 12:59:48 crc kubenswrapper[4865]: I0123 12:59:48.290723 4865 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 12:59:48 crc kubenswrapper[4865]: I0123 12:59:48.308684 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 12:59:48 crc kubenswrapper[4865]: I0123 12:59:48.351654 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 12:59:48 crc kubenswrapper[4865]: I0123 12:59:48.735710 4865 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6x58s" Jan 23 12:59:48 crc kubenswrapper[4865]: I0123 12:59:48.774417 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:48 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:48 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:48 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:48 crc kubenswrapper[4865]: I0123 12:59:48.774464 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:48 crc kubenswrapper[4865]: I0123 12:59:48.842091 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 12:59:48 crc kubenswrapper[4865]: I0123 12:59:48.946573 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 12:59:49 crc kubenswrapper[4865]: E0123 12:59:49.223196 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54370b2c_cc27_4800_ad14_d61df7b4c73d.slice/crio-conmon-9c9f53fb5b751f1033af191143537af0ff8bd25bc8516d1312401fa61f5f4d17.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54370b2c_cc27_4800_ad14_d61df7b4c73d.slice/crio-9c9f53fb5b751f1033af191143537af0ff8bd25bc8516d1312401fa61f5f4d17.scope\": RecentStats: unable to find data in memory cache]" Jan 23 12:59:49 crc kubenswrapper[4865]: I0123 12:59:49.765987 4865 generic.go:334] "Generic (PLEG): container finished" podID="54370b2c-cc27-4800-ad14-d61df7b4c73d" containerID="9c9f53fb5b751f1033af191143537af0ff8bd25bc8516d1312401fa61f5f4d17" exitCode=0 Jan 23 12:59:49 crc kubenswrapper[4865]: I0123 12:59:49.766324 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcx8b" event={"ID":"54370b2c-cc27-4800-ad14-d61df7b4c73d","Type":"ContainerDied","Data":"9c9f53fb5b751f1033af191143537af0ff8bd25bc8516d1312401fa61f5f4d17"} Jan 23 12:59:49 crc kubenswrapper[4865]: I0123 12:59:49.777015 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:49 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:49 crc kubenswrapper[4865]: [+]process-running ok Jan 23 
12:59:49 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:49 crc kubenswrapper[4865]: I0123 12:59:49.777071 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:49 crc kubenswrapper[4865]: I0123 12:59:49.917950 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 12:59:50 crc kubenswrapper[4865]: I0123 12:59:50.122485 4865 scope.go:117] "RemoveContainer" containerID="86ef6bcfe1b3263f30677005188e0a48259b6d6dcdb05efcd30f92f3a527c545" Jan 23 12:59:50 crc kubenswrapper[4865]: E0123 12:59:50.122749 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=swift-operator-controller-manager-547cbdb99f-zm52l_openstack-operators(661fbfd2-7d52-419a-943f-c57854d2306b)\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" podUID="661fbfd2-7d52-419a-943f-c57854d2306b" Jan 23 12:59:50 crc kubenswrapper[4865]: I0123 12:59:50.123032 4865 scope.go:117] "RemoveContainer" containerID="277f7fcd7eac07257ddfbc243d0a0fd950fbd5eaf54d5812499188c9ecf72e79" Jan 23 12:59:50 crc kubenswrapper[4865]: E0123 12:59:50.123222 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ovn-operator-controller-manager-55db956ddc-cbz92_openstack-operators(93194445-a021-4960-ab82-085f13cc959d)\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" podUID="93194445-a021-4960-ab82-085f13cc959d" Jan 23 12:59:50 crc kubenswrapper[4865]: I0123 12:59:50.277898 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 12:59:50 crc kubenswrapper[4865]: I0123 12:59:50.488503 4865 patch_prober.go:28] interesting pod/console-5d7d54b946-29gbz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 23 12:59:50 crc kubenswrapper[4865]: I0123 12:59:50.488844 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5d7d54b946-29gbz" podUID="9e2332f2-6e3b-4355-9af1-24a8980c7d8a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.39:8443/health\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 23 12:59:50 crc kubenswrapper[4865]: I0123 12:59:50.776987 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:50 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:50 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:50 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:50 crc kubenswrapper[4865]: I0123 12:59:50.777060 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Jan 23 12:59:50 crc kubenswrapper[4865]: I0123 12:59:50.789849 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 12:59:50 crc kubenswrapper[4865]: I0123 12:59:50.976549 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 23 12:59:51 crc kubenswrapper[4865]: I0123 12:59:51.149991 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 12:59:51 crc kubenswrapper[4865]: I0123 12:59:51.384300 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:51 crc kubenswrapper[4865]: I0123 12:59:51.774501 4865 patch_prober.go:28] interesting pod/router-default-5444994796-swk7h container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 12:59:51 crc kubenswrapper[4865]: [-]has-synced failed: reason withheld Jan 23 12:59:51 crc kubenswrapper[4865]: [+]process-running ok Jan 23 12:59:51 crc kubenswrapper[4865]: healthz check failed Jan 23 12:59:51 crc kubenswrapper[4865]: I0123 12:59:51.774589 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-swk7h" podUID="3fbcdfcf-19cc-46b9-a986-bd9426751459" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:52 crc kubenswrapper[4865]: I0123 12:59:52.118957 4865 scope.go:117] "RemoveContainer" containerID="b8fc2540ac6be036b476d713f6f32c61384132a04b49c6695b63e036120ddd4b" Jan 23 12:59:52 crc kubenswrapper[4865]: E0123 12:59:52.119506 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=ironic-operator-controller-manager-69d6c9f5b8-h6dkp_openstack-operators(967c3782-1bce-4145-8244-7650fe19dc22)\"" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" podUID="967c3782-1bce-4145-8244-7650fe19dc22" Jan 23 12:59:52 crc kubenswrapper[4865]: I0123 12:59:52.195108 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 12:59:52 crc kubenswrapper[4865]: I0123 12:59:52.774471 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 12:59:52 crc kubenswrapper[4865]: I0123 12:59:52.778168 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-swk7h" Jan 23 12:59:52 crc kubenswrapper[4865]: I0123 12:59:52.800288 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmfb" event={"ID":"518bbe98-a0a6-4693-babd-dcd94b8897c6","Type":"ContainerStarted","Data":"61373ff7c1a57ae794504d536130891cb4bca5ce8cb51a557f18b31b3d638fb1"} Jan 23 12:59:53 crc kubenswrapper[4865]: I0123 12:59:53.118422 4865 scope.go:117] "RemoveContainer" containerID="38c3a6ccccdf9a9b276e9ee0f9aa09c7c93bfb6ad347dc4b24a986fb1d05d602" Jan 23 12:59:53 crc 
kubenswrapper[4865]: E0123 12:59:53.118624 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=watcher-operator-controller-manager-5ffb9c6597-7mv2d_openstack-operators(8ef0fdaa-8086-467d-8106-5c6dec532dba)\"" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" podUID="8ef0fdaa-8086-467d-8106-5c6dec532dba" Jan 23 12:59:53 crc kubenswrapper[4865]: I0123 12:59:53.119006 4865 scope.go:117] "RemoveContainer" containerID="ec509e0de18f0a1eb53b6c974db80fba7ecc65f2c1424ad1321e243121b7162d" Jan 23 12:59:53 crc kubenswrapper[4865]: I0123 12:59:53.119550 4865 scope.go:117] "RemoveContainer" containerID="bd042d4838673f9732c4e8c413fa92e5a1c5e88525bd5aeef263d3b1e9d83000" Jan 23 12:59:53 crc kubenswrapper[4865]: E0123 12:59:53.119724 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=designate-operator-controller-manager-b45d7bf98-4c94z_openstack-operators(10627175-8e39-4799-bec7-c0b49b938a29)\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" podUID="10627175-8e39-4799-bec7-c0b49b938a29" Jan 23 12:59:53 crc kubenswrapper[4865]: I0123 12:59:53.857645 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 12:59:54 crc kubenswrapper[4865]: I0123 12:59:54.122801 4865 scope.go:117] "RemoveContainer" containerID="dac04c172b4cbd023a80707128cb43258f9d6203fd72be3fda5736dc24798b27" Jan 23 12:59:54 crc kubenswrapper[4865]: E0123 12:59:54.123057 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=nova-operator-controller-manager-6b8bc8d87d-6t8ts_openstack-operators(1959a742-ade2-4266-9a93-e96a1b6e3908)\"" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" podUID="1959a742-ade2-4266-9a93-e96a1b6e3908" Jan 23 12:59:54 crc kubenswrapper[4865]: I0123 12:59:54.123390 4865 scope.go:117] "RemoveContainer" containerID="486a01a99c4c9f08efb4c22aa2be31250963090a89b5e968702a63403a9f6476" Jan 23 12:59:54 crc kubenswrapper[4865]: E0123 12:59:54.123652 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=keystone-operator-controller-manager-b8b6d4659-9fl7w_openstack-operators(e92ddc14-bdb6-4407-b8a3-047079030166)\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" Jan 23 12:59:54 crc kubenswrapper[4865]: I0123 12:59:54.124999 4865 scope.go:117] "RemoveContainer" containerID="8b34ebfddc106574cbd9dc578c5c1e78a7af49d50692df0213f0a801b0d40728" Jan 23 12:59:54 crc kubenswrapper[4865]: E0123 12:59:54.125199 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-7xpgm_openshift-marketplace(189c80ac-7038-4b48-bebb-5c5d7e2cd362)\"" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" Jan 23 12:59:54 crc kubenswrapper[4865]: I0123 12:59:54.125960 4865 scope.go:117] 
"RemoveContainer" containerID="5ee6b1222e9411f014bd61cf294c67ddc555f8ce71c7f5164add3cbb62c2ca98" Jan 23 12:59:54 crc kubenswrapper[4865]: E0123 12:59:54.126344 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=openstack-operator-controller-manager-76c5c47f8f-p49qh_openstack-operators(b2ea2452-dc3b-4b93-a9d4-e562a63111c9)\"" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" Jan 23 12:59:54 crc kubenswrapper[4865]: I0123 12:59:54.355505 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:59:54 crc kubenswrapper[4865]: I0123 12:59:54.355909 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 12:59:54 crc kubenswrapper[4865]: I0123 12:59:54.894334 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 12:59:55 crc kubenswrapper[4865]: I0123 12:59:55.118781 4865 scope.go:117] "RemoveContainer" containerID="302afc1aa2f5954d846f9b947677f513aa4903bdf8789eb58d4cbe0e85645cff" Jan 23 12:59:55 crc kubenswrapper[4865]: I0123 12:59:55.118953 4865 scope.go:117] "RemoveContainer" containerID="820b838e41932b51e492781e46a62aba87e5bd3fe1ad917198cfd13ce68996af" Jan 23 12:59:55 crc kubenswrapper[4865]: E0123 12:59:55.119017 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=heat-operator-controller-manager-594c8c9d5d-fsch6_openstack-operators(0167f850-ba43-426a-8c56-aa171131e7da)\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" podUID="0167f850-ba43-426a-8c56-aa171131e7da" Jan 23 12:59:55 crc kubenswrapper[4865]: E0123 12:59:55.119178 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=barbican-operator-controller-manager-59dd8b7cbf-nppmq_openstack-operators(5fb13a32-67c3-46b1-a0b8-573e941e6c7e)\"" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" podUID="5fb13a32-67c3-46b1-a0b8-573e941e6c7e" Jan 23 12:59:55 crc kubenswrapper[4865]: I0123 12:59:55.119211 4865 scope.go:117] "RemoveContainer" containerID="7090bd800d91f637ccaa1100f5ceee8639300992d193dba3e886280899e7ce41" Jan 23 12:59:55 crc kubenswrapper[4865]: I0123 12:59:55.838138 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-szb9h" event={"ID":"3dee20a9-c14d-4a42-afb1-87d126996c56","Type":"ContainerStarted","Data":"57885233ed4d76e6e18d82eb6ee88efcf7a2e89fd1e0fa5be017f1940dcca4e6"} Jan 23 12:59:55 crc kubenswrapper[4865]: I0123 12:59:55.838560 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-szb9h" Jan 23 12:59:55 crc kubenswrapper[4865]: I0123 12:59:55.844968 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" event={"ID":"dbfec6f5-80b4-480f-a958-c3107b2776c0","Type":"ContainerStarted","Data":"9330020492a51301c48928cfd886ca6757b164192b8026823539ced4ce95b353"} Jan 23 12:59:55 crc kubenswrapper[4865]: I0123 12:59:55.845192 4865 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 12:59:55 crc kubenswrapper[4865]: I0123 12:59:55.847986 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcx8b" event={"ID":"54370b2c-cc27-4800-ad14-d61df7b4c73d","Type":"ContainerStarted","Data":"209a105b4626c03551b1a98e527566480a963aaf424e3d3ab9af992f67a30ece"} Jan 23 12:59:55 crc kubenswrapper[4865]: I0123 12:59:55.849955 4865 generic.go:334] "Generic (PLEG): container finished" podID="518bbe98-a0a6-4693-babd-dcd94b8897c6" containerID="61373ff7c1a57ae794504d536130891cb4bca5ce8cb51a557f18b31b3d638fb1" exitCode=0 Jan 23 12:59:55 crc kubenswrapper[4865]: I0123 12:59:55.850002 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmfb" event={"ID":"518bbe98-a0a6-4693-babd-dcd94b8897c6","Type":"ContainerDied","Data":"61373ff7c1a57ae794504d536130891cb4bca5ce8cb51a557f18b31b3d638fb1"} Jan 23 12:59:55 crc kubenswrapper[4865]: I0123 12:59:55.922441 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tcx8b" podStartSLOduration=5.999472904 podStartE2EDuration="11.922412658s" podCreationTimestamp="2026-01-23 12:59:44 +0000 UTC" firstStartedPulling="2026-01-23 12:59:46.739020485 +0000 UTC m=+4030.908092711" lastFinishedPulling="2026-01-23 12:59:52.661960239 +0000 UTC m=+4036.831032465" observedRunningTime="2026-01-23 12:59:55.92164623 +0000 UTC m=+4040.090718456" watchObservedRunningTime="2026-01-23 12:59:55.922412658 +0000 UTC m=+4040.091484884" Jan 23 12:59:56 crc kubenswrapper[4865]: I0123 12:59:56.124950 4865 scope.go:117] "RemoveContainer" containerID="5031a9000ba13266ff563f20ed5d1051c6903c52bb120dfe5b49a8077462e6f4" Jan 23 12:59:56 crc kubenswrapper[4865]: I0123 12:59:56.125015 4865 scope.go:117] "RemoveContainer" containerID="5a717a28c3ab6313b6531cf15eb4cb0ab0c1f50459f10a50e3c41ba8d6a77900" Jan 23 12:59:56 crc kubenswrapper[4865]: E0123 12:59:56.125315 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=metallb-operator-controller-manager-7df9698d5d-lk94b_metallb-system(d1a0503d-3fc4-45b6-87c0-7af4a7246a4b)\"" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" podUID="d1a0503d-3fc4-45b6-87c0-7af4a7246a4b" Jan 23 12:59:56 crc kubenswrapper[4865]: I0123 12:59:56.126148 4865 scope.go:117] "RemoveContainer" containerID="4b92021aa738d8ce5a77d8b00baf08a60d2b581b8a7c210f45e831e89d21b25d" Jan 23 12:59:56 crc kubenswrapper[4865]: I0123 12:59:56.126199 4865 scope.go:117] "RemoveContainer" containerID="9a4139dfc969c2097fac96a7889d206938f804f3ca29b28428ed6f6ee614103d" Jan 23 12:59:56 crc kubenswrapper[4865]: E0123 12:59:56.126337 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=mariadb-operator-controller-manager-c87fff755-mlm5v_openstack-operators(d2f4bfa4-63e2-418a-b52a-75d2992af596)\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" podUID="d2f4bfa4-63e2-418a-b52a-75d2992af596" Jan 23 12:59:56 crc kubenswrapper[4865]: I0123 12:59:56.126495 4865 scope.go:117] "RemoveContainer" containerID="4b3f708fd26a8e6b4ad737d720f4693da3a7df553a47fa2c49c9d37673a2b97b" Jan 23 12:59:56 crc kubenswrapper[4865]: 
E0123 12:59:56.126809 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=infra-operator-controller-manager-54ccf4f85d-l6w6d_openstack-operators(2c3366d9-565f-4601-acbb-b473dcfe126c)\"" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" podUID="2c3366d9-565f-4601-acbb-b473dcfe126c" Jan 23 12:59:56 crc kubenswrapper[4865]: I0123 12:59:56.389461 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 12:59:57 crc kubenswrapper[4865]: I0123 12:59:57.118009 4865 scope.go:117] "RemoveContainer" containerID="1e8084a377095353f6c0a51b0c1ba9381b21e0ff984cdc9e3e7ce846d91eaaac" Jan 23 12:59:57 crc kubenswrapper[4865]: I0123 12:59:57.118298 4865 scope.go:117] "RemoveContainer" containerID="c1fccdff35bce6869db28dae53682a3777098670187d98b7b6c30ee3e2b62d82" Jan 23 12:59:57 crc kubenswrapper[4865]: E0123 12:59:57.118406 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=cinder-operator-controller-manager-69cf5d4557-9jp5b_openstack-operators(bdf8f14b-af0d-43cc-b624-7dab2879dc4b)\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" Jan 23 12:59:57 crc kubenswrapper[4865]: I0123 12:59:57.494720 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 23 12:59:57 crc kubenswrapper[4865]: I0123 12:59:57.879304 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gh89m" event={"ID":"9faffae5-73bb-4980-8092-b79a6888476d","Type":"ContainerStarted","Data":"18401e30c2dd6edf3948cf5f2debf8f9626e8de4007b77b83050a51210ccf77b"} Jan 23 12:59:57 crc kubenswrapper[4865]: I0123 12:59:57.879896 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gh89m" Jan 23 12:59:57 crc kubenswrapper[4865]: I0123 12:59:57.888661 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6497cbfbf6-fkmfr_60877fc9-78f8-4298-8104-8cd90e28d3bd/route-controller-manager/2.log" Jan 23 12:59:57 crc kubenswrapper[4865]: I0123 12:59:57.888774 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" event={"ID":"60877fc9-78f8-4298-8104-8cd90e28d3bd","Type":"ContainerStarted","Data":"30dc4936744dd36b3ba9b43cce40dc09fb416ab81baf45c86cfa431276b9df35"} Jan 23 12:59:57 crc kubenswrapper[4865]: I0123 12:59:57.889078 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 12:59:57 crc kubenswrapper[4865]: I0123 12:59:57.894907 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-fdkt9" event={"ID":"8e227974-40b8-4d16-8d5f-961b705a9740","Type":"ContainerStarted","Data":"8adeef153f5a9bfba83f17f3ca7c48fd09a05dfc56e4ac7d37336792bc05377b"} Jan 23 12:59:58 crc kubenswrapper[4865]: I0123 12:59:58.076248 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6497cbfbf6-fkmfr" Jan 23 12:59:58 crc kubenswrapper[4865]: I0123 12:59:58.119092 4865 scope.go:117] "RemoveContainer" containerID="7851990b7985c234bf995169fc54ba523c81a79f855cd468ec727349c938b02e" Jan 23 12:59:58 crc kubenswrapper[4865]: E0123 12:59:58.119344 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=octavia-operator-controller-manager-7bd9774b6-bqtq9_openstack-operators(6d4fbfc8-900e-4c44-a458-039d37a6dd40)\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" Jan 23 12:59:58 crc kubenswrapper[4865]: I0123 12:59:58.418824 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4kkf6" Jan 23 12:59:58 crc kubenswrapper[4865]: I0123 12:59:58.633992 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 23 12:59:58 crc kubenswrapper[4865]: I0123 12:59:58.905003 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmfb" event={"ID":"518bbe98-a0a6-4693-babd-dcd94b8897c6","Type":"ContainerStarted","Data":"cc3a9fc439896ccf59918a1c6ca06cefdb6ecfc0ed3c3a518eb02feacf7a2808"} Jan 23 12:59:58 crc kubenswrapper[4865]: I0123 12:59:58.929766 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rxmfb" podStartSLOduration=4.034364559 podStartE2EDuration="13.929742274s" podCreationTimestamp="2026-01-23 12:59:45 +0000 UTC" firstStartedPulling="2026-01-23 12:59:47.752319421 +0000 UTC m=+4031.921391647" lastFinishedPulling="2026-01-23 12:59:57.647697136 +0000 UTC m=+4041.816769362" observedRunningTime="2026-01-23 12:59:58.924957259 +0000 UTC m=+4043.094029505" watchObservedRunningTime="2026-01-23 12:59:58.929742274 +0000 UTC m=+4043.098814500" Jan 23 12:59:59 crc kubenswrapper[4865]: I0123 12:59:59.100340 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 12:59:59 crc kubenswrapper[4865]: I0123 12:59:59.118662 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 12:59:59 crc kubenswrapper[4865]: E0123 12:59:59.118954 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 12:59:59 crc kubenswrapper[4865]: I0123 12:59:59.119504 4865 scope.go:117] "RemoveContainer" containerID="3912e2bb7e2743a31cd10155cf739bec788dcd478e4d809dd42b097119e32895" Jan 23 12:59:59 crc kubenswrapper[4865]: E0123 12:59:59.119725 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=package-server-manager pod=package-server-manager-789f6589d5-4g249_openshift-operator-lifecycle-manager(2c1ba660-8691-49e2-b0cc-056355d82f4c)\"" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" Jan 23 12:59:59 crc kubenswrapper[4865]: I0123 12:59:59.120381 4865 scope.go:117] "RemoveContainer" containerID="2cde1245521e015d0aae40d1c823114ec04701b03e302c916e892d6708eb497c" Jan 23 12:59:59 crc kubenswrapper[4865]: E0123 12:59:59.120548 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570)\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 12:59:59 crc kubenswrapper[4865]: I0123 12:59:59.121069 4865 scope.go:117] "RemoveContainer" containerID="b81b39a307d9441dfdf5f3dabb5507dd80e3fc704a9d4ee320d541a2a4b82254" Jan 23 12:59:59 crc kubenswrapper[4865]: I0123 12:59:59.121191 4865 scope.go:117] "RemoveContainer" containerID="37863c5fef2db414e26d4bc0b4d8291a7aa8d9062cb97e690a84bea171062f6d" Jan 23 12:59:59 crc kubenswrapper[4865]: E0123 12:59:59.121285 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=placement-operator-controller-manager-5d646b7d76-7fdbl_openstack-operators(fb9fb53a-b18e-4291-ab1b-83ac2fd78a73)\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 12:59:59 crc kubenswrapper[4865]: E0123 12:59:59.121608 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"olm-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=olm-operator pod=olm-operator-6b444d44fb-g5xkl_openshift-operator-lifecycle-manager(582f83b4-97dc-4f56-9879-c73fab80488a)\"" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" podUID="582f83b4-97dc-4f56-9879-c73fab80488a" Jan 23 12:59:59 crc kubenswrapper[4865]: I0123 12:59:59.121795 4865 scope.go:117] "RemoveContainer" containerID="46ffc542f334f599ae5569589549c116c3469a2659c4d68570d904f03567bbfb" Jan 23 12:59:59 crc kubenswrapper[4865]: E0123 12:59:59.122034 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=horizon-operator-controller-manager-77d5c5b54f-qftlt_openstack-operators(6aca96af-acfa-4c68-a2f4-ed19f08ddc4e)\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" Jan 23 12:59:59 crc kubenswrapper[4865]: I0123 12:59:59.122170 4865 scope.go:117] "RemoveContainer" containerID="c75b7b4a0ab86df22883e2a2e433ff69599920974bf67d9880538a7572f33d8f" Jan 23 12:59:59 crc kubenswrapper[4865]: E0123 12:59:59.122484 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"frr-k8s-webhook-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=frr-k8s-webhook-server pod=frr-k8s-webhook-server-7df86c4f6c-dkvk4_metallb-system(4116044f-0cc3-41fb-9f26-536213e1dfa3)\"" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" podUID="4116044f-0cc3-41fb-9f26-536213e1dfa3" Jan 23 12:59:59 crc kubenswrapper[4865]: I0123 12:59:59.179425 4865 reflector.go:368] Caches populated for 
*v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-m96k8" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.118223 4865 scope.go:117] "RemoveContainer" containerID="db5b8fe7b7736bdd3f92d1d332b37184bf21bfd7a6ea6dcadb3721bb397f71d3" Jan 23 13:00:00 crc kubenswrapper[4865]: E0123 13:00:00.118775 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=neutron-operator-controller-manager-5d8f59fb49-hnv8g_openstack-operators(429b62c2-b748-40b1-b00f-a1b0488fc5d0)\"" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" podUID="429b62c2-b748-40b1-b00f-a1b0488fc5d0" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.175035 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw"] Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.176797 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.178556 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.179026 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.208269 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw"] Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.209034 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-secret-volume\") pod \"collect-profiles-29486220-sz5xw\" (UID: \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.209182 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfjgw\" (UniqueName: \"kubernetes.io/projected/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-kube-api-access-kfjgw\") pod \"collect-profiles-29486220-sz5xw\" (UID: \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.209249 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-config-volume\") pod \"collect-profiles-29486220-sz5xw\" (UID: \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.310700 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-secret-volume\") pod \"collect-profiles-29486220-sz5xw\" (UID: \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:00 crc 
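The collect-profiles-29486220-sz5xw pod added here was created by a CronJob, and the numeric suffix of a CronJob-created Job is, as far as I know, the scheduled run time expressed in minutes since the Unix epoch. A quick check under that assumption lines up with the 13:00:00 "SyncLoop ADD" above:

```go
package main

import (
	"fmt"
	"time"
)

// Decode the Job-name suffix 29486220 (assumed to be minutes since the
// Unix epoch) into a wall-clock scheduled time.
func main() {
	const scheduledMinutes = 29486220
	t := time.Unix(scheduledMinutes*60, 0).UTC()
	fmt.Println(t) // 2026-01-23 13:00:00 +0000 UTC, matching the SyncLoop ADD timestamp
}
```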
kubenswrapper[4865]: I0123 13:00:00.310784 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfjgw\" (UniqueName: \"kubernetes.io/projected/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-kube-api-access-kfjgw\") pod \"collect-profiles-29486220-sz5xw\" (UID: \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.310818 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-config-volume\") pod \"collect-profiles-29486220-sz5xw\" (UID: \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.312356 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-config-volume\") pod \"collect-profiles-29486220-sz5xw\" (UID: \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.318261 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-secret-volume\") pod \"collect-profiles-29486220-sz5xw\" (UID: \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.328217 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfjgw\" (UniqueName: \"kubernetes.io/projected/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-kube-api-access-kfjgw\") pod \"collect-profiles-29486220-sz5xw\" (UID: \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.492525 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.494770 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.495513 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d7d54b946-29gbz" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.862935 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.864433 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.867265 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.867970 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rsw8g" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.868021 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.868579 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.889030 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.924994 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/62cb5904-6543-42ea-8a83-ba0681efa497-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.925079 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgfhs\" (UniqueName: \"kubernetes.io/projected/62cb5904-6543-42ea-8a83-ba0681efa497-kube-api-access-wgfhs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.925125 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/62cb5904-6543-42ea-8a83-ba0681efa497-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.925174 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.925215 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/62cb5904-6543-42ea-8a83-ba0681efa497-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.925269 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/62cb5904-6543-42ea-8a83-ba0681efa497-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.925345 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.925373 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:00 crc kubenswrapper[4865]: I0123 13:00:00.925406 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.026792 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.027092 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/62cb5904-6543-42ea-8a83-ba0681efa497-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.027129 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgfhs\" (UniqueName: \"kubernetes.io/projected/62cb5904-6543-42ea-8a83-ba0681efa497-kube-api-access-wgfhs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.027157 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/62cb5904-6543-42ea-8a83-ba0681efa497-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.027189 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.027213 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/62cb5904-6543-42ea-8a83-ba0681efa497-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.027247 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cb5904-6543-42ea-8a83-ba0681efa497-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.027313 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.027331 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.030428 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/62cb5904-6543-42ea-8a83-ba0681efa497-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.030819 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.032953 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/62cb5904-6543-42ea-8a83-ba0681efa497-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.034145 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cb5904-6543-42ea-8a83-ba0681efa497-config-data\") pod 
\"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.035274 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.035695 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/62cb5904-6543-42ea-8a83-ba0681efa497-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.042733 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.042793 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.057292 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgfhs\" (UniqueName: \"kubernetes.io/projected/62cb5904-6543-42ea-8a83-ba0681efa497-kube-api-access-wgfhs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.067704 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.118198 4865 scope.go:117] "RemoveContainer" containerID="e8d46240db8ddaba7757c9918aece7afc99fd17fa4b6e7ff56182c39c3b3c320" Jan 23 13:00:01 crc kubenswrapper[4865]: E0123 13:00:01.118480 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=csi-provisioner pod=csi-hostpathplugin-g7l9x_hostpath-provisioner(f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb)\"" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" podUID="f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.119061 4865 scope.go:117] "RemoveContainer" containerID="893186b97fd348cbc78b5583c9dee4848834f0940caa5a09f42c8b91d8258985" Jan 23 13:00:01 crc kubenswrapper[4865]: E0123 13:00:01.119234 4865 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-cf98fcc89-7kqtt_cert-manager(15434cef-8cb6-4386-b761-143f1819cac8)\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" podUID="15434cef-8cb6-4386-b761-143f1819cac8" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.242212 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.385737 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.407567 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw"] Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.951708 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" event={"ID":"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b","Type":"ContainerStarted","Data":"04a035152bfaf183fbc44321c6ff658a674907714b95a2d7fd0da6023fa5d9da"} Jan 23 13:00:01 crc kubenswrapper[4865]: I0123 13:00:01.971330 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 13:00:02 crc kubenswrapper[4865]: I0123 13:00:02.118573 4865 scope.go:117] "RemoveContainer" containerID="c9af27a61d3dc1d7f7b2d575fbafd518ca1e36a8d58fe436f0ce27465d9bdc48" Jan 23 13:00:02 crc kubenswrapper[4865]: I0123 13:00:02.118670 4865 scope.go:117] "RemoveContainer" containerID="2d16a5acbf212571adc93c0dc779c779aed1484718b55ebd8d904c177e523c24" Jan 23 13:00:02 crc kubenswrapper[4865]: E0123 13:00:02.118970 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=controller pod=controller-6968d8fdc4-8bjkz_metallb-system(3685d2b2-151b-479a-92c1-ae400eacd1b9)\"" pod="metallb-system/controller-6968d8fdc4-8bjkz" podUID="3685d2b2-151b-479a-92c1-ae400eacd1b9" Jan 23 13:00:02 crc kubenswrapper[4865]: E0123 13:00:02.118993 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=glance-operator-controller-manager-78fdd796fd-8qtnc_openstack-operators(da1cf187-8918-46b4-ab33-e8912c9d0dd6)\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" podUID="da1cf187-8918-46b4-ab33-e8912c9d0dd6" Jan 23 13:00:02 crc kubenswrapper[4865]: I0123 13:00:02.119003 4865 scope.go:117] "RemoveContainer" containerID="a662702efe9e87379f485c54ea659eea964b9ab589f691c6f62222a3e3f9537a" Jan 23 13:00:02 crc kubenswrapper[4865]: E0123 13:00:02.119194 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with CrashLoopBackOff: \"back-off 40s restarting failed container=webhook-server pod=metallb-operator-webhook-server-78f5776895-s7hqg_metallb-system(9177b0d0-3ce7-40fe-8567-85cb8dd5227a)\"" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" 
podUID="9177b0d0-3ce7-40fe-8567-85cb8dd5227a" Jan 23 13:00:02 crc kubenswrapper[4865]: I0123 13:00:02.717914 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 23 13:00:02 crc kubenswrapper[4865]: I0123 13:00:02.961110 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"62cb5904-6543-42ea-8a83-ba0681efa497","Type":"ContainerStarted","Data":"dc7189744acafb4f684614bef2c05b2cd2646b6460fe5b8077521aa37d937f77"} Jan 23 13:00:02 crc kubenswrapper[4865]: I0123 13:00:02.963533 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" event={"ID":"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b","Type":"ContainerStarted","Data":"c6660fbd52d9f76506b5250045bef7b06fbd8dee9e101be06697d6c0b2a98c9a"} Jan 23 13:00:03 crc kubenswrapper[4865]: I0123 13:00:03.002316 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" podStartSLOduration=3.002298166 podStartE2EDuration="3.002298166s" podCreationTimestamp="2026-01-23 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:00:02.993213285 +0000 UTC m=+4047.162285511" watchObservedRunningTime="2026-01-23 13:00:03.002298166 +0000 UTC m=+4047.171370392" Jan 23 13:00:03 crc kubenswrapper[4865]: I0123 13:00:03.118070 4865 scope.go:117] "RemoveContainer" containerID="277f7fcd7eac07257ddfbc243d0a0fd950fbd5eaf54d5812499188c9ecf72e79" Jan 23 13:00:03 crc kubenswrapper[4865]: I0123 13:00:03.976338 4865 generic.go:334] "Generic (PLEG): container finished" podID="4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b" containerID="c6660fbd52d9f76506b5250045bef7b06fbd8dee9e101be06697d6c0b2a98c9a" exitCode=0 Jan 23 13:00:03 crc kubenswrapper[4865]: I0123 13:00:03.976739 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" event={"ID":"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b","Type":"ContainerDied","Data":"c6660fbd52d9f76506b5250045bef7b06fbd8dee9e101be06697d6c0b2a98c9a"} Jan 23 13:00:03 crc kubenswrapper[4865]: I0123 13:00:03.979848 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" event={"ID":"93194445-a021-4960-ab82-085f13cc959d","Type":"ContainerStarted","Data":"bbe7bbd51a9e410bab64537b080dd7028e555e718528296306173c816b207475"} Jan 23 13:00:03 crc kubenswrapper[4865]: I0123 13:00:03.981027 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 13:00:04 crc kubenswrapper[4865]: I0123 13:00:04.358014 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 13:00:05 crc kubenswrapper[4865]: I0123 13:00:05.007777 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 13:00:05 crc kubenswrapper[4865]: I0123 13:00:05.007832 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 13:00:05 crc kubenswrapper[4865]: I0123 13:00:05.087125 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 13:00:05 crc kubenswrapper[4865]: I0123 13:00:05.118499 4865 scope.go:117] "RemoveContainer" containerID="86ef6bcfe1b3263f30677005188e0a48259b6d6dcdb05efcd30f92f3a527c545" Jan 23 13:00:05 crc kubenswrapper[4865]: I0123 13:00:05.146689 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rrxrn" Jan 23 13:00:05 crc kubenswrapper[4865]: I0123 13:00:05.545249 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rxmfb" Jan 23 13:00:05 crc kubenswrapper[4865]: I0123 13:00:05.546482 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rxmfb" Jan 23 13:00:05 crc kubenswrapper[4865]: I0123 13:00:05.970863 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.002188 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" event={"ID":"661fbfd2-7d52-419a-943f-c57854d2306b","Type":"ContainerStarted","Data":"6d664763bbb5c115a39c42b2121c33b934a058d8e9724b2af8f717712155c2c7"} Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.003559 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.004973 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" event={"ID":"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b","Type":"ContainerDied","Data":"04a035152bfaf183fbc44321c6ff658a674907714b95a2d7fd0da6023fa5d9da"} Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.005003 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04a035152bfaf183fbc44321c6ff658a674907714b95a2d7fd0da6023fa5d9da" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.005057 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486220-sz5xw" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.009245 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"62cb5904-6543-42ea-8a83-ba0681efa497","Type":"ContainerStarted","Data":"4818e5edbc7d9dde4943aa4dab788cb6fb3f331fd6138efa3cd900edf294a61e"} Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.047969 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=6.047948542 podStartE2EDuration="6.047948542s" podCreationTimestamp="2026-01-23 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:00:06.040613204 +0000 UTC m=+4050.209685430" watchObservedRunningTime="2026-01-23 13:00:06.047948542 +0000 UTC m=+4050.217020768" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.061244 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-config-volume\") pod \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\" (UID: \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\") " Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.061710 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.061987 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-config-volume" (OuterVolumeSpecName: "config-volume") pod "4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b" (UID: "4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.062224 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfjgw\" (UniqueName: \"kubernetes.io/projected/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-kube-api-access-kfjgw\") pod \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\" (UID: \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\") " Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.062299 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-secret-volume\") pod \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\" (UID: \"4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b\") " Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.062783 4865 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.073840 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-kube-api-access-kfjgw" (OuterVolumeSpecName: "kube-api-access-kfjgw") pod "4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b" (UID: "4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b"). InnerVolumeSpecName "kube-api-access-kfjgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.085711 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b" (UID: "4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.121472 4865 scope.go:117] "RemoveContainer" containerID="5ee6b1222e9411f014bd61cf294c67ddc555f8ce71c7f5164add3cbb62c2ca98" Jan 23 13:00:06 crc kubenswrapper[4865]: E0123 13:00:06.121778 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=openstack-operator-controller-manager-76c5c47f8f-p49qh_openstack-operators(b2ea2452-dc3b-4b93-a9d4-e562a63111c9)\"" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" podUID="b2ea2452-dc3b-4b93-a9d4-e562a63111c9" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.130867 4865 scope.go:117] "RemoveContainer" containerID="38c3a6ccccdf9a9b276e9ee0f9aa09c7c93bfb6ad347dc4b24a986fb1d05d602" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.130933 4865 scope.go:117] "RemoveContainer" containerID="302afc1aa2f5954d846f9b947677f513aa4903bdf8789eb58d4cbe0e85645cff" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.131122 4865 scope.go:117] "RemoveContainer" containerID="486a01a99c4c9f08efb4c22aa2be31250963090a89b5e968702a63403a9f6476" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.131744 4865 scope.go:117] "RemoveContainer" containerID="8b34ebfddc106574cbd9dc578c5c1e78a7af49d50692df0213f0a801b0d40728" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.131834 4865 scope.go:117] "RemoveContainer" containerID="820b838e41932b51e492781e46a62aba87e5bd3fe1ad917198cfd13ce68996af" Jan 23 13:00:06 crc kubenswrapper[4865]: E0123 13:00:06.132010 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-7xpgm_openshift-marketplace(189c80ac-7038-4b48-bebb-5c5d7e2cd362)\"" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" podUID="189c80ac-7038-4b48-bebb-5c5d7e2cd362" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.164752 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfjgw\" (UniqueName: \"kubernetes.io/projected/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-kube-api-access-kfjgw\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.164904 4865 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.384285 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" podUID="1405b73d-070d-495e-a80d-46fc2505ff8c" containerName="cert-manager-webhook" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.616518 4865 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-rxmfb" podUID="518bbe98-a0a6-4693-babd-dcd94b8897c6" containerName="registry-server" probeResult="failure" output=< Jan 23 13:00:06 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 13:00:06 crc kubenswrapper[4865]: > Jan 23 13:00:06 crc kubenswrapper[4865]: I0123 13:00:06.686108 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 23 13:00:07 crc kubenswrapper[4865]: I0123 13:00:07.018042 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" event={"ID":"0167f850-ba43-426a-8c56-aa171131e7da","Type":"ContainerStarted","Data":"a6dca55f2097613fa782d5ad74ca08e3a322fd1040ae5b00ccc7243dfbfd90c9"} Jan 23 13:00:07 crc kubenswrapper[4865]: I0123 13:00:07.020448 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" event={"ID":"e92ddc14-bdb6-4407-b8a3-047079030166","Type":"ContainerStarted","Data":"14a056ea243672fbed340a421c79aa3d7fc610dd53223f41a9e3524b6c360cb4"} Jan 23 13:00:07 crc kubenswrapper[4865]: I0123 13:00:07.020653 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 13:00:07 crc kubenswrapper[4865]: I0123 13:00:07.031664 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" event={"ID":"8ef0fdaa-8086-467d-8106-5c6dec532dba","Type":"ContainerStarted","Data":"7f8a8d2ba0d3a04f4eaa91eaefb27c15d0f2d70222dcc1171c4e64aeac27355d"} Jan 23 13:00:07 crc kubenswrapper[4865]: I0123 13:00:07.031826 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 13:00:07 crc kubenswrapper[4865]: I0123 13:00:07.033559 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" event={"ID":"5fb13a32-67c3-46b1-a0b8-573e941e6c7e","Type":"ContainerStarted","Data":"7832f24da4a042361375237eb61feb8c21e20df659dd96c7bd2911c57e180468"} Jan 23 13:00:07 crc kubenswrapper[4865]: I0123 13:00:07.118305 4865 scope.go:117] "RemoveContainer" containerID="4b92021aa738d8ce5a77d8b00baf08a60d2b581b8a7c210f45e831e89d21b25d" Jan 23 13:00:07 crc kubenswrapper[4865]: I0123 13:00:07.118893 4865 scope.go:117] "RemoveContainer" containerID="b8fc2540ac6be036b476d713f6f32c61384132a04b49c6695b63e036120ddd4b" Jan 23 13:00:07 crc kubenswrapper[4865]: I0123 13:00:07.166860 4865 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-hlqct" Jan 23 13:00:07 crc kubenswrapper[4865]: I0123 13:00:07.257338 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-kkkcn" Jan 23 13:00:07 crc kubenswrapper[4865]: I0123 13:00:07.495967 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 13:00:08 crc kubenswrapper[4865]: I0123 13:00:08.043828 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" 
event={"ID":"967c3782-1bce-4145-8244-7650fe19dc22","Type":"ContainerStarted","Data":"8fbeeb426a4e964073cd5a7d5983637c5f179fac1ace23d89af07ae34972e4a9"} Jan 23 13:00:08 crc kubenswrapper[4865]: I0123 13:00:08.044100 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 13:00:08 crc kubenswrapper[4865]: I0123 13:00:08.047986 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" event={"ID":"d2f4bfa4-63e2-418a-b52a-75d2992af596","Type":"ContainerStarted","Data":"10d9b449997bf4d4e8e65c4b8fe9e922af14e469c1decf962d5ba9f7c01f66b8"} Jan 23 13:00:08 crc kubenswrapper[4865]: I0123 13:00:08.117929 4865 scope.go:117] "RemoveContainer" containerID="dac04c172b4cbd023a80707128cb43258f9d6203fd72be3fda5736dc24798b27" Jan 23 13:00:08 crc kubenswrapper[4865]: I0123 13:00:08.117968 4865 scope.go:117] "RemoveContainer" containerID="5a717a28c3ab6313b6531cf15eb4cb0ab0c1f50459f10a50e3c41ba8d6a77900" Jan 23 13:00:08 crc kubenswrapper[4865]: I0123 13:00:08.118560 4865 scope.go:117] "RemoveContainer" containerID="bd042d4838673f9732c4e8c413fa92e5a1c5e88525bd5aeef263d3b1e9d83000" Jan 23 13:00:08 crc kubenswrapper[4865]: I0123 13:00:08.594326 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tcx8b"] Jan 23 13:00:08 crc kubenswrapper[4865]: I0123 13:00:08.594792 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tcx8b" podUID="54370b2c-cc27-4800-ad14-d61df7b4c73d" containerName="registry-server" containerID="cri-o://209a105b4626c03551b1a98e527566480a963aaf424e3d3ab9af992f67a30ece" gracePeriod=2 Jan 23 13:00:08 crc kubenswrapper[4865]: I0123 13:00:08.900751 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7"] Jan 23 13:00:08 crc kubenswrapper[4865]: I0123 13:00:08.908694 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486175-j8lb7"] Jan 23 13:00:09 crc kubenswrapper[4865]: I0123 13:00:09.060183 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" event={"ID":"10627175-8e39-4799-bec7-c0b49b938a29","Type":"ContainerStarted","Data":"b4d65a8f5940f45a49768f0c0aa2c5435088db89b6dc0335c8c7ac6946677705"} Jan 23 13:00:09 crc kubenswrapper[4865]: I0123 13:00:09.061483 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 13:00:09 crc kubenswrapper[4865]: I0123 13:00:09.064075 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" event={"ID":"1959a742-ade2-4266-9a93-e96a1b6e3908","Type":"ContainerStarted","Data":"4420e2d6e3bf1f5f53ac1711e91adb6950df7d4aab8132681eb2d2ca87046736"} Jan 23 13:00:09 crc kubenswrapper[4865]: I0123 13:00:09.064277 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 13:00:09 crc kubenswrapper[4865]: I0123 13:00:09.068946 4865 generic.go:334] "Generic (PLEG): container finished" podID="54370b2c-cc27-4800-ad14-d61df7b4c73d" containerID="209a105b4626c03551b1a98e527566480a963aaf424e3d3ab9af992f67a30ece" exitCode=0 Jan 
23 13:00:09 crc kubenswrapper[4865]: I0123 13:00:09.069026 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcx8b" event={"ID":"54370b2c-cc27-4800-ad14-d61df7b4c73d","Type":"ContainerDied","Data":"209a105b4626c03551b1a98e527566480a963aaf424e3d3ab9af992f67a30ece"} Jan 23 13:00:09 crc kubenswrapper[4865]: I0123 13:00:09.071855 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" event={"ID":"d1a0503d-3fc4-45b6-87c0-7af4a7246a4b","Type":"ContainerStarted","Data":"70c25d427f8a38f09a9554ccfb5d0d62ff7a08c4e7c10751c353b457c5cc05e5"} Jan 23 13:00:09 crc kubenswrapper[4865]: I0123 13:00:09.072178 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.082714 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tcx8b" event={"ID":"54370b2c-cc27-4800-ad14-d61df7b4c73d","Type":"ContainerDied","Data":"e293099ad32e8d9c4235bc2d2945d450d7393de78dc9ccd310764f9dccaf8466"} Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.083047 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e293099ad32e8d9c4235bc2d2945d450d7393de78dc9ccd310764f9dccaf8466" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.123573 4865 scope.go:117] "RemoveContainer" containerID="37863c5fef2db414e26d4bc0b4d8291a7aa8d9062cb97e690a84bea171062f6d" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.125454 4865 scope.go:117] "RemoveContainer" containerID="4b3f708fd26a8e6b4ad737d720f4693da3a7df553a47fa2c49c9d37673a2b97b" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.125818 4865 scope.go:117] "RemoveContainer" containerID="2cde1245521e015d0aae40d1c823114ec04701b03e302c916e892d6708eb497c" Jan 23 13:00:10 crc kubenswrapper[4865]: E0123 13:00:10.126020 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=manila-operator-controller-manager-78c6999f6f-bps6b_openstack-operators(a9bb243e-e7c3-4f68-be35-d86fa049c570)\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" podUID="a9bb243e-e7c3-4f68-be35-d86fa049c570" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.126079 4865 scope.go:117] "RemoveContainer" containerID="1e8084a377095353f6c0a51b0c1ba9381b21e0ff984cdc9e3e7ce846d91eaaac" Jan 23 13:00:10 crc kubenswrapper[4865]: E0123 13:00:10.126241 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=cinder-operator-controller-manager-69cf5d4557-9jp5b_openstack-operators(bdf8f14b-af0d-43cc-b624-7dab2879dc4b)\"" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" podUID="bdf8f14b-af0d-43cc-b624-7dab2879dc4b" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.126934 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.135038 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0efc0078-ae1e-44d9-b57f-361da731424b" path="/var/lib/kubelet/pods/0efc0078-ae1e-44d9-b57f-361da731424b/volumes" Jan 23 13:00:10 crc kubenswrapper[4865]: 
I0123 13:00:10.281830 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.361165 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54370b2c-cc27-4800-ad14-d61df7b4c73d-catalog-content\") pod \"54370b2c-cc27-4800-ad14-d61df7b4c73d\" (UID: \"54370b2c-cc27-4800-ad14-d61df7b4c73d\") " Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.361282 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4xfx\" (UniqueName: \"kubernetes.io/projected/54370b2c-cc27-4800-ad14-d61df7b4c73d-kube-api-access-g4xfx\") pod \"54370b2c-cc27-4800-ad14-d61df7b4c73d\" (UID: \"54370b2c-cc27-4800-ad14-d61df7b4c73d\") " Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.361507 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54370b2c-cc27-4800-ad14-d61df7b4c73d-utilities\") pod \"54370b2c-cc27-4800-ad14-d61df7b4c73d\" (UID: \"54370b2c-cc27-4800-ad14-d61df7b4c73d\") " Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.369996 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54370b2c-cc27-4800-ad14-d61df7b4c73d-utilities" (OuterVolumeSpecName: "utilities") pod "54370b2c-cc27-4800-ad14-d61df7b4c73d" (UID: "54370b2c-cc27-4800-ad14-d61df7b4c73d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.387650 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54370b2c-cc27-4800-ad14-d61df7b4c73d-kube-api-access-g4xfx" (OuterVolumeSpecName: "kube-api-access-g4xfx") pod "54370b2c-cc27-4800-ad14-d61df7b4c73d" (UID: "54370b2c-cc27-4800-ad14-d61df7b4c73d"). InnerVolumeSpecName "kube-api-access-g4xfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.445358 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54370b2c-cc27-4800-ad14-d61df7b4c73d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54370b2c-cc27-4800-ad14-d61df7b4c73d" (UID: "54370b2c-cc27-4800-ad14-d61df7b4c73d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.464760 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54370b2c-cc27-4800-ad14-d61df7b4c73d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.465000 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54370b2c-cc27-4800-ad14-d61df7b4c73d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.465089 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4xfx\" (UniqueName: \"kubernetes.io/projected/54370b2c-cc27-4800-ad14-d61df7b4c73d-kube-api-access-g4xfx\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:10 crc kubenswrapper[4865]: I0123 13:00:10.476588 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-zwqgn" Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.094045 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-6b444d44fb-g5xkl_582f83b4-97dc-4f56-9879-c73fab80488a/olm-operator/4.log" Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.095186 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" event={"ID":"582f83b4-97dc-4f56-9879-c73fab80488a","Type":"ContainerStarted","Data":"82a3bb6d2d381daf60b2fb9b228c981d438c32b77f7ab36bdf827992e9bf5559"} Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.095825 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.096678 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" event={"ID":"2c3366d9-565f-4601-acbb-b473dcfe126c","Type":"ContainerStarted","Data":"f51743bfb4b0c91280db35190d1f316af5d621661a5b619626aa4b84586cdad1"} Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.096702 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tcx8b" Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.097935 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.109620 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-g5xkl" Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.117994 4865 scope.go:117] "RemoveContainer" containerID="7851990b7985c234bf995169fc54ba523c81a79f855cd468ec727349c938b02e" Jan 23 13:00:11 crc kubenswrapper[4865]: E0123 13:00:11.118235 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=octavia-operator-controller-manager-7bd9774b6-bqtq9_openstack-operators(6d4fbfc8-900e-4c44-a458-039d37a6dd40)\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" podUID="6d4fbfc8-900e-4c44-a458-039d37a6dd40" Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.160217 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tcx8b"] Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.173959 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tcx8b"] Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.240232 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.384130 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-x972r" Jan 23 13:00:11 crc kubenswrapper[4865]: I0123 13:00:11.593236 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 13:00:12 crc kubenswrapper[4865]: I0123 13:00:12.117992 4865 scope.go:117] "RemoveContainer" containerID="db5b8fe7b7736bdd3f92d1d332b37184bf21bfd7a6ea6dcadb3721bb397f71d3" Jan 23 13:00:12 crc kubenswrapper[4865]: I0123 13:00:12.148351 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54370b2c-cc27-4800-ad14-d61df7b4c73d" path="/var/lib/kubelet/pods/54370b2c-cc27-4800-ad14-d61df7b4c73d/volumes" Jan 23 13:00:13 crc kubenswrapper[4865]: I0123 13:00:13.118331 4865 scope.go:117] "RemoveContainer" containerID="46ffc542f334f599ae5569589549c116c3469a2659c4d68570d904f03567bbfb" Jan 23 13:00:13 crc kubenswrapper[4865]: I0123 13:00:13.119229 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" event={"ID":"429b62c2-b748-40b1-b00f-a1b0488fc5d0","Type":"ContainerStarted","Data":"49ac22c82608df4104463894269fd583edd02a4dc47d04628a9fe4b2d72ed75d"} Jan 23 13:00:13 crc kubenswrapper[4865]: E0123 13:00:13.119799 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=horizon-operator-controller-manager-77d5c5b54f-qftlt_openstack-operators(6aca96af-acfa-4c68-a2f4-ed19f08ddc4e)\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" podUID="6aca96af-acfa-4c68-a2f4-ed19f08ddc4e" Jan 23 13:00:13 crc kubenswrapper[4865]: I0123 
13:00:13.120270 4865 scope.go:117] "RemoveContainer" containerID="c75b7b4a0ab86df22883e2a2e433ff69599920974bf67d9880538a7572f33d8f" Jan 23 13:00:13 crc kubenswrapper[4865]: I0123 13:00:13.120764 4865 scope.go:117] "RemoveContainer" containerID="3912e2bb7e2743a31cd10155cf739bec788dcd478e4d809dd42b097119e32895" Jan 23 13:00:13 crc kubenswrapper[4865]: I0123 13:00:13.120891 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 13:00:13 crc kubenswrapper[4865]: E0123 13:00:13.121867 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"package-server-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=package-server-manager pod=package-server-manager-789f6589d5-4g249_openshift-operator-lifecycle-manager(2c1ba660-8691-49e2-b0cc-056355d82f4c)\"" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" podUID="2c1ba660-8691-49e2-b0cc-056355d82f4c" Jan 23 13:00:13 crc kubenswrapper[4865]: I0123 13:00:13.182311 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jh6tv" Jan 23 13:00:13 crc kubenswrapper[4865]: I0123 13:00:13.699884 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 23 13:00:13 crc kubenswrapper[4865]: I0123 13:00:13.780036 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-gh89m" Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.106202 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.122304 4865 scope.go:117] "RemoveContainer" containerID="2d16a5acbf212571adc93c0dc779c779aed1484718b55ebd8d904c177e523c24" Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.122678 4865 scope.go:117] "RemoveContainer" containerID="b81b39a307d9441dfdf5f3dabb5507dd80e3fc704a9d4ee320d541a2a4b82254" Jan 23 13:00:14 crc kubenswrapper[4865]: E0123 13:00:14.122919 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=manager pod=placement-operator-controller-manager-5d646b7d76-7fdbl_openstack-operators(fb9fb53a-b18e-4291-ab1b-83ac2fd78a73)\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" podUID="fb9fb53a-b18e-4291-ab1b-83ac2fd78a73" Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.123470 4865 scope.go:117] "RemoveContainer" containerID="893186b97fd348cbc78b5583c9dee4848834f0940caa5a09f42c8b91d8258985" Jan 23 13:00:14 crc kubenswrapper[4865]: E0123 13:00:14.123832 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-cainjector\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cert-manager-cainjector pod=cert-manager-cainjector-cf98fcc89-7kqtt_cert-manager(15434cef-8cb6-4386-b761-143f1819cac8)\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" podUID="15434cef-8cb6-4386-b761-143f1819cac8" Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.126276 4865 scope.go:117] "RemoveContainer" containerID="c9af27a61d3dc1d7f7b2d575fbafd518ca1e36a8d58fe436f0ce27465d9bdc48" Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.134059 4865 generic.go:334] "Generic 
(PLEG): container finished" podID="c728912d-821c-4759-b175-3fd4324ad4f2" containerID="c74f0d480b376c78be8098a723dd25f8263d240a2395a6275f1e9ce7a869a41f" exitCode=137 Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.134128 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c728912d-821c-4759-b175-3fd4324ad4f2","Type":"ContainerDied","Data":"c74f0d480b376c78be8098a723dd25f8263d240a2395a6275f1e9ce7a869a41f"} Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.134157 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c728912d-821c-4759-b175-3fd4324ad4f2","Type":"ContainerStarted","Data":"303618cde26d189684c432a6eb235544113abe07dbb6f5e45943109128b728f0"} Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.134177 4865 scope.go:117] "RemoveContainer" containerID="69363188c023ec037365e6462967a0eb9169a136bc3d2131e45cd5a55c949188" Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.137984 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.139789 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" event={"ID":"4116044f-0cc3-41fb-9f26-536213e1dfa3","Type":"ContainerStarted","Data":"f2ae4b1743638a26ea295e06e6d1edcf9ef3dd57378e00030682ffd93aab90a3"} Jan 23 13:00:14 crc kubenswrapper[4865]: E0123 13:00:14.140341 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.142565 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.355475 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.699856 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 23 13:00:14 crc kubenswrapper[4865]: I0123 13:00:14.807217 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-szb9h" Jan 23 13:00:15 crc kubenswrapper[4865]: I0123 13:00:15.119242 4865 scope.go:117] "RemoveContainer" containerID="e8d46240db8ddaba7757c9918aece7afc99fd17fa4b6e7ff56182c39c3b3c320" Jan 23 13:00:15 crc kubenswrapper[4865]: I0123 13:00:15.152805 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" event={"ID":"da1cf187-8918-46b4-ab33-e8912c9d0dd6","Type":"ContainerStarted","Data":"ddc0bdf9ae060dad438f1f4ba453240ba83b477bd9268e0828691f148dd129d1"} Jan 23 13:00:15 crc kubenswrapper[4865]: I0123 13:00:15.551476 4865 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"swift-swift-dockercfg-m5pb4" Jan 23 13:00:15 crc kubenswrapper[4865]: I0123 13:00:15.820971 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rxmfb" Jan 23 13:00:15 crc kubenswrapper[4865]: I0123 13:00:15.881544 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rxmfb" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.125227 4865 scope.go:117] "RemoveContainer" containerID="a662702efe9e87379f485c54ea659eea964b9ab589f691c6f62222a3e3f9537a" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.171265 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-g7l9x" event={"ID":"f7d3e9f0-4ed8-427a-bca3-b0403d23d8fb","Type":"ContainerStarted","Data":"ad35d7a640e47469ccb9f83d7647ab5025e7ba5fe694cf6c447257ee06bf31e2"} Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.177235 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8bjkz" event={"ID":"3685d2b2-151b-479a-92c1-ae400eacd1b9","Type":"ContainerStarted","Data":"67ac098825ea87453c121a9c15f6ce4c375aecc57923cab5c3a62f1199288321"} Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.177284 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.218060 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.227987 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59dd8b7cbf-nppmq" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.274927 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-4c94z" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.356295 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.359972 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fsch6" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.625203 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-69d6c9f5b8-h6dkp" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.702462 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.712424 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.744139 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.746815 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-mlm5v" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.977782 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-cbz92" Jan 23 13:00:16 crc kubenswrapper[4865]: I0123 13:00:16.978703 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6b8bc8d87d-6t8ts" Jan 23 13:00:17 crc kubenswrapper[4865]: I0123 13:00:17.016227 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxmfb"] Jan 23 13:00:17 crc kubenswrapper[4865]: I0123 13:00:17.118347 4865 scope.go:117] "RemoveContainer" containerID="5ee6b1222e9411f014bd61cf294c67ddc555f8ce71c7f5164add3cbb62c2ca98" Jan 23 13:00:17 crc kubenswrapper[4865]: I0123 13:00:17.118880 4865 scope.go:117] "RemoveContainer" containerID="8b34ebfddc106574cbd9dc578c5c1e78a7af49d50692df0213f0a801b0d40728" Jan 23 13:00:17 crc kubenswrapper[4865]: I0123 13:00:17.303437 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" event={"ID":"9177b0d0-3ce7-40fe-8567-85cb8dd5227a","Type":"ContainerStarted","Data":"c026d210da90bee0b8a8629f74e9c541e44d2e625d02f740cf7115d935dfd576"} Jan 23 13:00:17 crc kubenswrapper[4865]: I0123 13:00:17.321438 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rxmfb" podUID="518bbe98-a0a6-4693-babd-dcd94b8897c6" containerName="registry-server" containerID="cri-o://cc3a9fc439896ccf59918a1c6ca06cefdb6ecfc0ed3c3a518eb02feacf7a2808" gracePeriod=2 Jan 23 13:00:17 crc kubenswrapper[4865]: I0123 13:00:17.325101 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-zm52l" Jan 23 13:00:17 crc kubenswrapper[4865]: I0123 13:00:17.328047 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5ffb9c6597-7mv2d" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.109102 4865 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.258676 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.283965 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.295745 4865 generic.go:334] "Generic (PLEG): container finished" podID="518bbe98-a0a6-4693-babd-dcd94b8897c6" containerID="cc3a9fc439896ccf59918a1c6ca06cefdb6ecfc0ed3c3a518eb02feacf7a2808" exitCode=0 Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.295806 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmfb" event={"ID":"518bbe98-a0a6-4693-babd-dcd94b8897c6","Type":"ContainerDied","Data":"cc3a9fc439896ccf59918a1c6ca06cefdb6ecfc0ed3c3a518eb02feacf7a2808"} Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.298058 4865 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-7xpgm_189c80ac-7038-4b48-bebb-5c5d7e2cd362/marketplace-operator/4.log" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.298135 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" event={"ID":"189c80ac-7038-4b48-bebb-5c5d7e2cd362","Type":"ContainerStarted","Data":"50392e875a738e78a0b6bd6a1a3f9e2b37996a0f6ee28fdddfaa7673b77d5648"} Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.298578 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.300966 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" event={"ID":"b2ea2452-dc3b-4b93-a9d4-e562a63111c9","Type":"ContainerStarted","Data":"8c6fefe7d6788a8f7e9d83649d8a8fdbe4e95d9b853876ec0adab8badce6e0be"} Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.301552 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.303289 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-7xpgm" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.420811 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxmfb" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.611143 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/518bbe98-a0a6-4693-babd-dcd94b8897c6-catalog-content\") pod \"518bbe98-a0a6-4693-babd-dcd94b8897c6\" (UID: \"518bbe98-a0a6-4693-babd-dcd94b8897c6\") " Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.611200 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/518bbe98-a0a6-4693-babd-dcd94b8897c6-utilities\") pod \"518bbe98-a0a6-4693-babd-dcd94b8897c6\" (UID: \"518bbe98-a0a6-4693-babd-dcd94b8897c6\") " Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.611251 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsrd2\" (UniqueName: \"kubernetes.io/projected/518bbe98-a0a6-4693-babd-dcd94b8897c6-kube-api-access-vsrd2\") pod \"518bbe98-a0a6-4693-babd-dcd94b8897c6\" (UID: \"518bbe98-a0a6-4693-babd-dcd94b8897c6\") " Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.612842 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/518bbe98-a0a6-4693-babd-dcd94b8897c6-utilities" (OuterVolumeSpecName: "utilities") pod "518bbe98-a0a6-4693-babd-dcd94b8897c6" (UID: "518bbe98-a0a6-4693-babd-dcd94b8897c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.620831 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518bbe98-a0a6-4693-babd-dcd94b8897c6-kube-api-access-vsrd2" (OuterVolumeSpecName: "kube-api-access-vsrd2") pod "518bbe98-a0a6-4693-babd-dcd94b8897c6" (UID: "518bbe98-a0a6-4693-babd-dcd94b8897c6"). InnerVolumeSpecName "kube-api-access-vsrd2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.656936 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/518bbe98-a0a6-4693-babd-dcd94b8897c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "518bbe98-a0a6-4693-babd-dcd94b8897c6" (UID: "518bbe98-a0a6-4693-babd-dcd94b8897c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.715078 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/518bbe98-a0a6-4693-babd-dcd94b8897c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.715122 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/518bbe98-a0a6-4693-babd-dcd94b8897c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:18 crc kubenswrapper[4865]: I0123 13:00:18.715136 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsrd2\" (UniqueName: \"kubernetes.io/projected/518bbe98-a0a6-4693-babd-dcd94b8897c6-kube-api-access-vsrd2\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:19 crc kubenswrapper[4865]: I0123 13:00:19.315059 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rxmfb" Jan 23 13:00:19 crc kubenswrapper[4865]: I0123 13:00:19.315061 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxmfb" event={"ID":"518bbe98-a0a6-4693-babd-dcd94b8897c6","Type":"ContainerDied","Data":"ae4f6822454d8763fab15d3c29d55207273230b523fba6a330db877f7de8fb40"} Jan 23 13:00:19 crc kubenswrapper[4865]: I0123 13:00:19.315161 4865 scope.go:117] "RemoveContainer" containerID="cc3a9fc439896ccf59918a1c6ca06cefdb6ecfc0ed3c3a518eb02feacf7a2808" Jan 23 13:00:19 crc kubenswrapper[4865]: I0123 13:00:19.359225 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxmfb"] Jan 23 13:00:19 crc kubenswrapper[4865]: I0123 13:00:19.359575 4865 scope.go:117] "RemoveContainer" containerID="61373ff7c1a57ae794504d536130891cb4bca5ce8cb51a557f18b31b3d638fb1" Jan 23 13:00:19 crc kubenswrapper[4865]: I0123 13:00:19.376092 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rxmfb"] Jan 23 13:00:19 crc kubenswrapper[4865]: I0123 13:00:19.386466 4865 scope.go:117] "RemoveContainer" containerID="ab688b1ced078dc892d998b90385fa0fe764395f3fcdc0aa0a01e8a1899d14bd" Jan 23 13:00:20 crc kubenswrapper[4865]: I0123 13:00:20.131191 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="518bbe98-a0a6-4693-babd-dcd94b8897c6" path="/var/lib/kubelet/pods/518bbe98-a0a6-4693-babd-dcd94b8897c6/volumes" Jan 23 13:00:22 crc kubenswrapper[4865]: I0123 13:00:22.015409 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-54ccf4f85d-l6w6d" Jan 23 13:00:22 crc kubenswrapper[4865]: I0123 13:00:22.654788 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 13:00:22 crc kubenswrapper[4865]: I0123 13:00:22.886689 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" 
Jan 23 13:00:23 crc kubenswrapper[4865]: I0123 13:00:23.339008 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 13:00:23 crc kubenswrapper[4865]: I0123 13:00:23.437053 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:00:23 crc kubenswrapper[4865]: I0123 13:00:23.796902 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dkvk4" Jan 23 13:00:25 crc kubenswrapper[4865]: I0123 13:00:25.118546 4865 scope.go:117] "RemoveContainer" containerID="2cde1245521e015d0aae40d1c823114ec04701b03e302c916e892d6708eb497c" Jan 23 13:00:25 crc kubenswrapper[4865]: I0123 13:00:25.118969 4865 scope.go:117] "RemoveContainer" containerID="1e8084a377095353f6c0a51b0c1ba9381b21e0ff984cdc9e3e7ce846d91eaaac" Jan 23 13:00:25 crc kubenswrapper[4865]: I0123 13:00:25.119290 4865 scope.go:117] "RemoveContainer" containerID="7851990b7985c234bf995169fc54ba523c81a79f855cd468ec727349c938b02e" Jan 23 13:00:25 crc kubenswrapper[4865]: I0123 13:00:25.119322 4865 scope.go:117] "RemoveContainer" containerID="b81b39a307d9441dfdf5f3dabb5507dd80e3fc704a9d4ee320d541a2a4b82254" Jan 23 13:00:25 crc kubenswrapper[4865]: I0123 13:00:25.119472 4865 scope.go:117] "RemoveContainer" containerID="3912e2bb7e2743a31cd10155cf739bec788dcd478e4d809dd42b097119e32895" Jan 23 13:00:25 crc kubenswrapper[4865]: I0123 13:00:25.119858 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 13:00:25 crc kubenswrapper[4865]: E0123 13:00:25.120064 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.342925 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-8qtnc" Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.419903 4865 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-789f6589d5-4g249_2c1ba660-8691-49e2-b0cc-056355d82f4c/package-server-manager/4.log" Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.421475 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" event={"ID":"2c1ba660-8691-49e2-b0cc-056355d82f4c","Type":"ContainerStarted","Data":"55a6a07c662dc997c25c10098c3ad704af740c52032fb0fb28e2d791c43a36f8"} Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.421750 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.432449 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" 
event={"ID":"fb9fb53a-b18e-4291-ab1b-83ac2fd78a73","Type":"ContainerStarted","Data":"ef27ac62ec58f7b86ae9f3dd5b696f79d3630ebc03990d40d8ff3261f563f660"} Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.433712 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.440966 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" event={"ID":"6d4fbfc8-900e-4c44-a458-039d37a6dd40","Type":"ContainerStarted","Data":"7060d33852b28166a3763e60ef29d3ee45f11a0f9da4533347df9bf7a64bc724"} Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.441421 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.457047 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" event={"ID":"bdf8f14b-af0d-43cc-b624-7dab2879dc4b","Type":"ContainerStarted","Data":"aa6776e0e2521a9a51ea0bb69b3c8c207738f085913c4167edffaac1f799983b"} Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.457708 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.458811 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" event={"ID":"a9bb243e-e7c3-4f68-be35-d86fa049c570","Type":"ContainerStarted","Data":"c5fa6e21143e6e3da20299ba85e5c441c253cdfe2e6ac6778596860f7f5c67ba"} Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.459245 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.817853 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 23 13:00:26 crc kubenswrapper[4865]: I0123 13:00:26.919303 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5d8f59fb49-hnv8g" Jan 23 13:00:27 crc kubenswrapper[4865]: I0123 13:00:27.118372 4865 scope.go:117] "RemoveContainer" containerID="46ffc542f334f599ae5569589549c116c3469a2659c4d68570d904f03567bbfb" Jan 23 13:00:27 crc kubenswrapper[4865]: I0123 13:00:27.468648 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" event={"ID":"6aca96af-acfa-4c68-a2f4-ed19f08ddc4e","Type":"ContainerStarted","Data":"e1c1fbfd62719a603bdd5179b09f68b364d9f0aee2bbc0676466c60345d0dc3f"} Jan 23 13:00:27 crc kubenswrapper[4865]: I0123 13:00:27.469271 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 13:00:28 crc kubenswrapper[4865]: I0123 13:00:28.118067 4865 scope.go:117] "RemoveContainer" containerID="893186b97fd348cbc78b5583c9dee4848834f0940caa5a09f42c8b91d8258985" Jan 23 13:00:28 crc kubenswrapper[4865]: I0123 13:00:28.157694 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 
13:00:28 crc kubenswrapper[4865]: I0123 13:00:28.279717 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:00:29 crc kubenswrapper[4865]: I0123 13:00:29.167898 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-76c5c47f8f-p49qh" Jan 23 13:00:29 crc kubenswrapper[4865]: I0123 13:00:29.359856 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 13:00:29 crc kubenswrapper[4865]: I0123 13:00:29.359953 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 13:00:29 crc kubenswrapper[4865]: I0123 13:00:29.360722 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"30d3be517958f944291cf17e27e1c020327e5e7efe24145c62c6dbbaacd35043"} pod="openstack/horizon-66f7b94cdb-f7pw2" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 13:00:29 crc kubenswrapper[4865]: I0123 13:00:29.360760 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" containerID="cri-o://30d3be517958f944291cf17e27e1c020327e5e7efe24145c62c6dbbaacd35043" gracePeriod=30 Jan 23 13:00:29 crc kubenswrapper[4865]: I0123 13:00:29.487386 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-7kqtt" event={"ID":"15434cef-8cb6-4386-b761-143f1819cac8","Type":"ContainerStarted","Data":"e4e3c294eaf3786e94d719a0fcf9c4a7851ed5a81f0daa40622e24ae7f8089d3"} Jan 23 13:00:29 crc kubenswrapper[4865]: I0123 13:00:29.734514 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 13:00:30 crc kubenswrapper[4865]: I0123 13:00:30.375436 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:00:30 crc kubenswrapper[4865]: I0123 13:00:30.376225 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-central-agent" containerID="cri-o://cbd56c19fce379136487197a6ca7b7c1c495ccc639974ad450c02ab717ac42c7" gracePeriod=30 Jan 23 13:00:30 crc kubenswrapper[4865]: I0123 13:00:30.376277 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="proxy-httpd" containerID="cri-o://9b6e695de5e717162ef62eac2c551888f78a635db078d49284199b6bcfb3030d" gracePeriod=30 Jan 23 13:00:30 crc kubenswrapper[4865]: I0123 13:00:30.376300 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="sg-core" containerID="cri-o://33c46b0735e5af2c1e7d667d610ff52f526dbbc2cc720a72f733806394f3c237" gracePeriod=30 Jan 23 13:00:30 crc 
kubenswrapper[4865]: I0123 13:00:30.377893 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-notification-agent" containerID="cri-o://477c52a280a948025e13a8feda92fd8b614f305fe95984ffd0b1b5641cedd357" gracePeriod=30 Jan 23 13:00:30 crc kubenswrapper[4865]: I0123 13:00:30.507236 4865 generic.go:334] "Generic (PLEG): container finished" podID="c63db198-8ec8-42b1-8211-d207c172706c" containerID="9b6e695de5e717162ef62eac2c551888f78a635db078d49284199b6bcfb3030d" exitCode=0 Jan 23 13:00:30 crc kubenswrapper[4865]: I0123 13:00:30.507465 4865 generic.go:334] "Generic (PLEG): container finished" podID="c63db198-8ec8-42b1-8211-d207c172706c" containerID="33c46b0735e5af2c1e7d667d610ff52f526dbbc2cc720a72f733806394f3c237" exitCode=2 Jan 23 13:00:30 crc kubenswrapper[4865]: I0123 13:00:30.507560 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c63db198-8ec8-42b1-8211-d207c172706c","Type":"ContainerDied","Data":"9b6e695de5e717162ef62eac2c551888f78a635db078d49284199b6bcfb3030d"} Jan 23 13:00:30 crc kubenswrapper[4865]: I0123 13:00:30.507670 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c63db198-8ec8-42b1-8211-d207c172706c","Type":"ContainerDied","Data":"33c46b0735e5af2c1e7d667d610ff52f526dbbc2cc720a72f733806394f3c237"} Jan 23 13:00:31 crc kubenswrapper[4865]: I0123 13:00:31.519797 4865 generic.go:334] "Generic (PLEG): container finished" podID="c63db198-8ec8-42b1-8211-d207c172706c" containerID="cbd56c19fce379136487197a6ca7b7c1c495ccc639974ad450c02ab717ac42c7" exitCode=0 Jan 23 13:00:31 crc kubenswrapper[4865]: I0123 13:00:31.519938 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c63db198-8ec8-42b1-8211-d207c172706c","Type":"ContainerDied","Data":"cbd56c19fce379136487197a6ca7b7c1c495ccc639974ad450c02ab717ac42c7"} Jan 23 13:00:31 crc kubenswrapper[4865]: I0123 13:00:31.520877 4865 scope.go:117] "RemoveContainer" containerID="0eaf574272189462b7fced379e7c324bec419a8b14cb558060d9466a414d00db" Jan 23 13:00:32 crc kubenswrapper[4865]: I0123 13:00:32.674086 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-78f5776895-s7hqg" Jan 23 13:00:33 crc kubenswrapper[4865]: I0123 13:00:33.295538 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:00:33 crc kubenswrapper[4865]: I0123 13:00:33.341778 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8bjkz" Jan 23 13:00:33 crc kubenswrapper[4865]: I0123 13:00:33.545070 4865 generic.go:334] "Generic (PLEG): container finished" podID="c63db198-8ec8-42b1-8211-d207c172706c" containerID="477c52a280a948025e13a8feda92fd8b614f305fe95984ffd0b1b5641cedd357" exitCode=0 Jan 23 13:00:33 crc kubenswrapper[4865]: I0123 13:00:33.545112 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c63db198-8ec8-42b1-8211-d207c172706c","Type":"ContainerDied","Data":"477c52a280a948025e13a8feda92fd8b614f305fe95984ffd0b1b5641cedd357"} Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.060739 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.167359 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn2r7\" (UniqueName: \"kubernetes.io/projected/c63db198-8ec8-42b1-8211-d207c172706c-kube-api-access-gn2r7\") pod \"c63db198-8ec8-42b1-8211-d207c172706c\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.167414 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c63db198-8ec8-42b1-8211-d207c172706c-run-httpd\") pod \"c63db198-8ec8-42b1-8211-d207c172706c\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.167490 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-ceilometer-tls-certs\") pod \"c63db198-8ec8-42b1-8211-d207c172706c\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.167511 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-scripts\") pod \"c63db198-8ec8-42b1-8211-d207c172706c\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.167536 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-sg-core-conf-yaml\") pod \"c63db198-8ec8-42b1-8211-d207c172706c\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.167587 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-combined-ca-bundle\") pod \"c63db198-8ec8-42b1-8211-d207c172706c\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.167659 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-config-data\") pod \"c63db198-8ec8-42b1-8211-d207c172706c\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.167782 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c63db198-8ec8-42b1-8211-d207c172706c-log-httpd\") pod \"c63db198-8ec8-42b1-8211-d207c172706c\" (UID: \"c63db198-8ec8-42b1-8211-d207c172706c\") " Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.168643 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c63db198-8ec8-42b1-8211-d207c172706c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c63db198-8ec8-42b1-8211-d207c172706c" (UID: "c63db198-8ec8-42b1-8211-d207c172706c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.169087 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c63db198-8ec8-42b1-8211-d207c172706c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c63db198-8ec8-42b1-8211-d207c172706c" (UID: "c63db198-8ec8-42b1-8211-d207c172706c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.174403 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c63db198-8ec8-42b1-8211-d207c172706c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.174454 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c63db198-8ec8-42b1-8211-d207c172706c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.197012 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c63db198-8ec8-42b1-8211-d207c172706c-kube-api-access-gn2r7" (OuterVolumeSpecName: "kube-api-access-gn2r7") pod "c63db198-8ec8-42b1-8211-d207c172706c" (UID: "c63db198-8ec8-42b1-8211-d207c172706c"). InnerVolumeSpecName "kube-api-access-gn2r7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.197357 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-scripts" (OuterVolumeSpecName: "scripts") pod "c63db198-8ec8-42b1-8211-d207c172706c" (UID: "c63db198-8ec8-42b1-8211-d207c172706c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.227143 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c63db198-8ec8-42b1-8211-d207c172706c" (UID: "c63db198-8ec8-42b1-8211-d207c172706c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.254354 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "c63db198-8ec8-42b1-8211-d207c172706c" (UID: "c63db198-8ec8-42b1-8211-d207c172706c"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.277194 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn2r7\" (UniqueName: \"kubernetes.io/projected/c63db198-8ec8-42b1-8211-d207c172706c-kube-api-access-gn2r7\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.277228 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.277241 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.277250 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.279544 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c63db198-8ec8-42b1-8211-d207c172706c" (UID: "c63db198-8ec8-42b1-8211-d207c172706c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.300340 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-config-data" (OuterVolumeSpecName: "config-data") pod "c63db198-8ec8-42b1-8211-d207c172706c" (UID: "c63db198-8ec8-42b1-8211-d207c172706c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.378811 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.378847 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c63db198-8ec8-42b1-8211-d207c172706c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.558070 4865 generic.go:334] "Generic (PLEG): container finished" podID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerID="30d3be517958f944291cf17e27e1c020327e5e7efe24145c62c6dbbaacd35043" exitCode=0 Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.558178 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerDied","Data":"30d3be517958f944291cf17e27e1c020327e5e7efe24145c62c6dbbaacd35043"} Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.558246 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerStarted","Data":"3b159abcad530536fdd7d6889057fa206bca7748bc5c03ac17748413d7478e82"} Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.558270 4865 scope.go:117] "RemoveContainer" containerID="d4e6d818c5e068d51936524f311b0a4ea0a416bc4ab3fabeff119dbfad8a049e" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.563822 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c63db198-8ec8-42b1-8211-d207c172706c","Type":"ContainerDied","Data":"65100cdd4df99842af0b84774c42260f5499775a796fc462445a64e82467fe4b"} Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.563920 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.656122 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.673432 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683034 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:00:34 crc kubenswrapper[4865]: E0123 13:00:34.683478 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b" containerName="collect-profiles" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683497 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b" containerName="collect-profiles" Jan 23 13:00:34 crc kubenswrapper[4865]: E0123 13:00:34.683511 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518bbe98-a0a6-4693-babd-dcd94b8897c6" containerName="extract-content" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683518 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="518bbe98-a0a6-4693-babd-dcd94b8897c6" containerName="extract-content" Jan 23 13:00:34 crc kubenswrapper[4865]: E0123 13:00:34.683539 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="sg-core" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683545 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="sg-core" Jan 23 13:00:34 crc kubenswrapper[4865]: E0123 13:00:34.683564 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54370b2c-cc27-4800-ad14-d61df7b4c73d" containerName="extract-content" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683570 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="54370b2c-cc27-4800-ad14-d61df7b4c73d" containerName="extract-content" Jan 23 13:00:34 crc kubenswrapper[4865]: E0123 13:00:34.683584 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518bbe98-a0a6-4693-babd-dcd94b8897c6" containerName="registry-server" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683590 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="518bbe98-a0a6-4693-babd-dcd94b8897c6" containerName="registry-server" Jan 23 13:00:34 crc kubenswrapper[4865]: E0123 13:00:34.683617 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54370b2c-cc27-4800-ad14-d61df7b4c73d" containerName="registry-server" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683623 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="54370b2c-cc27-4800-ad14-d61df7b4c73d" containerName="registry-server" Jan 23 13:00:34 crc kubenswrapper[4865]: E0123 13:00:34.683637 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-notification-agent" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683643 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-notification-agent" Jan 23 13:00:34 crc kubenswrapper[4865]: E0123 13:00:34.683649 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="proxy-httpd" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683655 
4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="proxy-httpd" Jan 23 13:00:34 crc kubenswrapper[4865]: E0123 13:00:34.683671 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-central-agent" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683678 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-central-agent" Jan 23 13:00:34 crc kubenswrapper[4865]: E0123 13:00:34.683684 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518bbe98-a0a6-4693-babd-dcd94b8897c6" containerName="extract-utilities" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683690 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="518bbe98-a0a6-4693-babd-dcd94b8897c6" containerName="extract-utilities" Jan 23 13:00:34 crc kubenswrapper[4865]: E0123 13:00:34.683702 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54370b2c-cc27-4800-ad14-d61df7b4c73d" containerName="extract-utilities" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683710 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="54370b2c-cc27-4800-ad14-d61df7b4c73d" containerName="extract-utilities" Jan 23 13:00:34 crc kubenswrapper[4865]: E0123 13:00:34.683727 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-central-agent" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683734 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-central-agent" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683939 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="sg-core" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683977 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-central-agent" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.683992 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-notification-agent" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.684008 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="54370b2c-cc27-4800-ad14-d61df7b4c73d" containerName="registry-server" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.684024 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="proxy-httpd" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.684051 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="518bbe98-a0a6-4693-babd-dcd94b8897c6" containerName="registry-server" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.684076 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bcfb75f-bd30-4ac2-8fa7-6b1bbf5bd06b" containerName="collect-profiles" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.690670 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="c63db198-8ec8-42b1-8211-d207c172706c" containerName="ceilometer-central-agent" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.692240 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.695697 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.704297 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.704632 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.706510 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.773140 4865 scope.go:117] "RemoveContainer" containerID="cbd56c19fce379136487197a6ca7b7c1c495ccc639974ad450c02ab717ac42c7" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.790124 4865 scope.go:117] "RemoveContainer" containerID="9b6e695de5e717162ef62eac2c551888f78a635db078d49284199b6bcfb3030d" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.791510 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l88qk\" (UniqueName: \"kubernetes.io/projected/086063e3-8b57-4be7-bc3d-cc2b033a86a3-kube-api-access-l88qk\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.791560 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.791614 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/086063e3-8b57-4be7-bc3d-cc2b033a86a3-log-httpd\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.791645 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-scripts\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.791695 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.791767 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.791805 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-config-data\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.791834 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/086063e3-8b57-4be7-bc3d-cc2b033a86a3-run-httpd\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.816689 4865 scope.go:117] "RemoveContainer" containerID="33c46b0735e5af2c1e7d667d610ff52f526dbbc2cc720a72f733806394f3c237" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.858856 4865 scope.go:117] "RemoveContainer" containerID="477c52a280a948025e13a8feda92fd8b614f305fe95984ffd0b1b5641cedd357" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.893055 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.893114 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/086063e3-8b57-4be7-bc3d-cc2b033a86a3-log-httpd\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.893152 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-scripts\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.893196 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.893237 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.893274 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-config-data\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.893305 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/086063e3-8b57-4be7-bc3d-cc2b033a86a3-run-httpd\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.893339 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l88qk\" (UniqueName: 
\"kubernetes.io/projected/086063e3-8b57-4be7-bc3d-cc2b033a86a3-kube-api-access-l88qk\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.894132 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/086063e3-8b57-4be7-bc3d-cc2b033a86a3-log-httpd\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.895417 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/086063e3-8b57-4be7-bc3d-cc2b033a86a3-run-httpd\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.897632 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.898194 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.900418 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.900543 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-scripts\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.903085 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-config-data\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:34 crc kubenswrapper[4865]: I0123 13:00:34.915731 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l88qk\" (UniqueName: \"kubernetes.io/projected/086063e3-8b57-4be7-bc3d-cc2b033a86a3-kube-api-access-l88qk\") pod \"ceilometer-0\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " pod="openstack/ceilometer-0" Jan 23 13:00:35 crc kubenswrapper[4865]: I0123 13:00:35.037315 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:00:35 crc kubenswrapper[4865]: I0123 13:00:35.573556 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:00:35 crc kubenswrapper[4865]: I0123 13:00:35.588189 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"086063e3-8b57-4be7-bc3d-cc2b033a86a3","Type":"ContainerStarted","Data":"b9dc59de63febdfaafe7ebe365f8b4d976d811223e2a942138cb27b4d0fa9952"} Jan 23 13:00:36 crc kubenswrapper[4865]: I0123 13:00:36.129984 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c63db198-8ec8-42b1-8211-d207c172706c" path="/var/lib/kubelet/pods/c63db198-8ec8-42b1-8211-d207c172706c/volumes" Jan 23 13:00:36 crc kubenswrapper[4865]: I0123 13:00:36.545652 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-9jp5b" Jan 23 13:00:36 crc kubenswrapper[4865]: I0123 13:00:36.555142 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-qftlt" Jan 23 13:00:36 crc kubenswrapper[4865]: I0123 13:00:36.630970 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"086063e3-8b57-4be7-bc3d-cc2b033a86a3","Type":"ContainerStarted","Data":"64b5e70b2cdb6ccd86af23dce3a7dcb0156f0a31e839fb79de76e7d107749259"} Jan 23 13:00:36 crc kubenswrapper[4865]: I0123 13:00:36.631950 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"086063e3-8b57-4be7-bc3d-cc2b033a86a3","Type":"ContainerStarted","Data":"955b016241f7ebfb29acc1826047a252f94fdc9917593c412dcb414ae1b47383"} Jan 23 13:00:36 crc kubenswrapper[4865]: I0123 13:00:36.782343 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-bps6b" Jan 23 13:00:36 crc kubenswrapper[4865]: I0123 13:00:36.871647 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-bqtq9" Jan 23 13:00:36 crc kubenswrapper[4865]: I0123 13:00:36.956317 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-7fdbl" Jan 23 13:00:37 crc kubenswrapper[4865]: I0123 13:00:37.123209 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 13:00:37 crc kubenswrapper[4865]: E0123 13:00:37.123433 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:00:37 crc kubenswrapper[4865]: I0123 13:00:37.671846 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"086063e3-8b57-4be7-bc3d-cc2b033a86a3","Type":"ContainerStarted","Data":"68be5d4b51286c9963eb5869266edff1fd7186379cfe490e8d66b9b00d510467"} Jan 23 13:00:38 crc kubenswrapper[4865]: I0123 13:00:38.283625 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" 
podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:00:40 crc kubenswrapper[4865]: I0123 13:00:40.726620 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"086063e3-8b57-4be7-bc3d-cc2b033a86a3","Type":"ContainerStarted","Data":"f37ccd761ae21a05f12b9b43c279e585490ad895d33901aa95c23ad5d8b6b5c1"} Jan 23 13:00:40 crc kubenswrapper[4865]: I0123 13:00:40.726941 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 13:00:40 crc kubenswrapper[4865]: I0123 13:00:40.758181 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.718194949 podStartE2EDuration="6.758163996s" podCreationTimestamp="2026-01-23 13:00:34 +0000 UTC" firstStartedPulling="2026-01-23 13:00:35.582065 +0000 UTC m=+4079.751137226" lastFinishedPulling="2026-01-23 13:00:38.622034047 +0000 UTC m=+4082.791106273" observedRunningTime="2026-01-23 13:00:40.75503898 +0000 UTC m=+4084.924111206" watchObservedRunningTime="2026-01-23 13:00:40.758163996 +0000 UTC m=+4084.927236222" Jan 23 13:00:42 crc kubenswrapper[4865]: I0123 13:00:42.146293 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7df9698d5d-lk94b" Jan 23 13:00:43 crc kubenswrapper[4865]: I0123 13:00:43.305221 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:00:44 crc kubenswrapper[4865]: I0123 13:00:44.355157 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 13:00:44 crc kubenswrapper[4865]: I0123 13:00:44.355195 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 13:00:44 crc kubenswrapper[4865]: I0123 13:00:44.356341 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 13:00:48 crc kubenswrapper[4865]: I0123 13:00:48.118656 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 13:00:48 crc kubenswrapper[4865]: E0123 13:00:48.119448 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:00:48 crc kubenswrapper[4865]: I0123 13:00:48.328361 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="c728912d-821c-4759-b175-3fd4324ad4f2" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 13:00:48 crc kubenswrapper[4865]: I0123 13:00:48.859256 4865 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 23 13:00:48 crc kubenswrapper[4865]: I0123 13:00:48.991758 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 23 13:00:50 crc kubenswrapper[4865]: I0123 13:00:50.747041 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 23 13:00:51 crc kubenswrapper[4865]: I0123 13:00:51.373197 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 23 13:00:53 crc kubenswrapper[4865]: I0123 13:00:53.375557 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 23 13:00:54 crc kubenswrapper[4865]: I0123 13:00:54.355924 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 13:00:56 crc kubenswrapper[4865]: I0123 13:00:56.058395 4865 scope.go:117] "RemoveContainer" containerID="ded0fc76f555bd38b9f49579107bacabaacd9eb9988c3eecea20fcdd7a7ae28b" Jan 23 13:00:56 crc kubenswrapper[4865]: I0123 13:00:56.154769 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:00:56 crc kubenswrapper[4865]: I0123 13:00:56.155052 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="ceilometer-central-agent" containerID="cri-o://955b016241f7ebfb29acc1826047a252f94fdc9917593c412dcb414ae1b47383" gracePeriod=30 Jan 23 13:00:56 crc kubenswrapper[4865]: I0123 13:00:56.155192 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="sg-core" containerID="cri-o://68be5d4b51286c9963eb5869266edff1fd7186379cfe490e8d66b9b00d510467" gracePeriod=30 Jan 23 13:00:56 crc kubenswrapper[4865]: I0123 13:00:56.155224 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="proxy-httpd" containerID="cri-o://f37ccd761ae21a05f12b9b43c279e585490ad895d33901aa95c23ad5d8b6b5c1" gracePeriod=30 Jan 23 13:00:56 crc kubenswrapper[4865]: I0123 13:00:56.155272 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="ceilometer-notification-agent" containerID="cri-o://64b5e70b2cdb6ccd86af23dce3a7dcb0156f0a31e839fb79de76e7d107749259" gracePeriod=30 Jan 23 13:00:56 crc kubenswrapper[4865]: I0123 13:00:56.857194 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.1.10:3000/\": read tcp 10.217.0.2:43876->10.217.1.10:3000: read: connection reset by peer" Jan 23 13:00:58 crc kubenswrapper[4865]: I0123 13:00:58.930334 4865 generic.go:334] "Generic (PLEG): container finished" podID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerID="f37ccd761ae21a05f12b9b43c279e585490ad895d33901aa95c23ad5d8b6b5c1" exitCode=0 Jan 23 13:00:58 crc kubenswrapper[4865]: I0123 
13:00:58.931607 4865 generic.go:334] "Generic (PLEG): container finished" podID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerID="68be5d4b51286c9963eb5869266edff1fd7186379cfe490e8d66b9b00d510467" exitCode=2 Jan 23 13:00:58 crc kubenswrapper[4865]: I0123 13:00:58.931683 4865 generic.go:334] "Generic (PLEG): container finished" podID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerID="64b5e70b2cdb6ccd86af23dce3a7dcb0156f0a31e839fb79de76e7d107749259" exitCode=0 Jan 23 13:00:58 crc kubenswrapper[4865]: I0123 13:00:58.931745 4865 generic.go:334] "Generic (PLEG): container finished" podID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerID="955b016241f7ebfb29acc1826047a252f94fdc9917593c412dcb414ae1b47383" exitCode=0 Jan 23 13:00:58 crc kubenswrapper[4865]: I0123 13:00:58.930412 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"086063e3-8b57-4be7-bc3d-cc2b033a86a3","Type":"ContainerDied","Data":"f37ccd761ae21a05f12b9b43c279e585490ad895d33901aa95c23ad5d8b6b5c1"} Jan 23 13:00:58 crc kubenswrapper[4865]: I0123 13:00:58.931879 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"086063e3-8b57-4be7-bc3d-cc2b033a86a3","Type":"ContainerDied","Data":"68be5d4b51286c9963eb5869266edff1fd7186379cfe490e8d66b9b00d510467"} Jan 23 13:00:58 crc kubenswrapper[4865]: I0123 13:00:58.931948 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"086063e3-8b57-4be7-bc3d-cc2b033a86a3","Type":"ContainerDied","Data":"64b5e70b2cdb6ccd86af23dce3a7dcb0156f0a31e839fb79de76e7d107749259"} Jan 23 13:00:58 crc kubenswrapper[4865]: I0123 13:00:58.932018 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"086063e3-8b57-4be7-bc3d-cc2b033a86a3","Type":"ContainerDied","Data":"955b016241f7ebfb29acc1826047a252f94fdc9917593c412dcb414ae1b47383"} Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.790321 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.909393 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-combined-ca-bundle\") pod \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.909452 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/086063e3-8b57-4be7-bc3d-cc2b033a86a3-run-httpd\") pod \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.909491 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-config-data\") pod \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.909651 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-ceilometer-tls-certs\") pod \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.909688 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-sg-core-conf-yaml\") pod \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.909728 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l88qk\" (UniqueName: \"kubernetes.io/projected/086063e3-8b57-4be7-bc3d-cc2b033a86a3-kube-api-access-l88qk\") pod \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.909766 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/086063e3-8b57-4be7-bc3d-cc2b033a86a3-log-httpd\") pod \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.909790 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-scripts\") pod \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\" (UID: \"086063e3-8b57-4be7-bc3d-cc2b033a86a3\") " Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.911618 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/086063e3-8b57-4be7-bc3d-cc2b033a86a3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "086063e3-8b57-4be7-bc3d-cc2b033a86a3" (UID: "086063e3-8b57-4be7-bc3d-cc2b033a86a3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.912434 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/086063e3-8b57-4be7-bc3d-cc2b033a86a3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "086063e3-8b57-4be7-bc3d-cc2b033a86a3" (UID: "086063e3-8b57-4be7-bc3d-cc2b033a86a3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.941319 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086063e3-8b57-4be7-bc3d-cc2b033a86a3-kube-api-access-l88qk" (OuterVolumeSpecName: "kube-api-access-l88qk") pod "086063e3-8b57-4be7-bc3d-cc2b033a86a3" (UID: "086063e3-8b57-4be7-bc3d-cc2b033a86a3"). InnerVolumeSpecName "kube-api-access-l88qk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:00:59 crc kubenswrapper[4865]: I0123 13:00:59.969688 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-scripts" (OuterVolumeSpecName: "scripts") pod "086063e3-8b57-4be7-bc3d-cc2b033a86a3" (UID: "086063e3-8b57-4be7-bc3d-cc2b033a86a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.013306 4865 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/086063e3-8b57-4be7-bc3d-cc2b033a86a3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.020033 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l88qk\" (UniqueName: \"kubernetes.io/projected/086063e3-8b57-4be7-bc3d-cc2b033a86a3-kube-api-access-l88qk\") on node \"crc\" DevicePath \"\"" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.020127 4865 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/086063e3-8b57-4be7-bc3d-cc2b033a86a3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.020194 4865 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.024331 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"086063e3-8b57-4be7-bc3d-cc2b033a86a3","Type":"ContainerDied","Data":"b9dc59de63febdfaafe7ebe365f8b4d976d811223e2a942138cb27b4d0fa9952"} Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.024399 4865 scope.go:117] "RemoveContainer" containerID="f37ccd761ae21a05f12b9b43c279e585490ad895d33901aa95c23ad5d8b6b5c1" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.024561 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.097238 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "086063e3-8b57-4be7-bc3d-cc2b033a86a3" (UID: "086063e3-8b57-4be7-bc3d-cc2b033a86a3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.121880 4865 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.148980 4865 scope.go:117] "RemoveContainer" containerID="68be5d4b51286c9963eb5869266edff1fd7186379cfe490e8d66b9b00d510467" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.157812 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "086063e3-8b57-4be7-bc3d-cc2b033a86a3" (UID: "086063e3-8b57-4be7-bc3d-cc2b033a86a3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.182496 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "086063e3-8b57-4be7-bc3d-cc2b033a86a3" (UID: "086063e3-8b57-4be7-bc3d-cc2b033a86a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.210449 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29486221-m55wk"] Jan 23 13:01:00 crc kubenswrapper[4865]: E0123 13:01:00.211063 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="proxy-httpd" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.211084 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="proxy-httpd" Jan 23 13:01:00 crc kubenswrapper[4865]: E0123 13:01:00.211112 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="sg-core" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.211123 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="sg-core" Jan 23 13:01:00 crc kubenswrapper[4865]: E0123 13:01:00.211168 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="ceilometer-central-agent" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.211177 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="ceilometer-central-agent" Jan 23 13:01:00 crc kubenswrapper[4865]: E0123 13:01:00.211206 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="ceilometer-notification-agent" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.211216 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="ceilometer-notification-agent" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.211442 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="proxy-httpd" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.211466 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" 
containerName="ceilometer-notification-agent" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.211516 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="sg-core" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.211541 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" containerName="ceilometer-central-agent" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.212407 4865 scope.go:117] "RemoveContainer" containerID="64b5e70b2cdb6ccd86af23dce3a7dcb0156f0a31e839fb79de76e7d107749259" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.212435 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.224166 4865 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.224361 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.239571 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486221-m55wk"] Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.268255 4865 scope.go:117] "RemoveContainer" containerID="955b016241f7ebfb29acc1826047a252f94fdc9917593c412dcb414ae1b47383" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.275212 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-config-data" (OuterVolumeSpecName: "config-data") pod "086063e3-8b57-4be7-bc3d-cc2b033a86a3" (UID: "086063e3-8b57-4be7-bc3d-cc2b033a86a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.330129 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-combined-ca-bundle\") pod \"keystone-cron-29486221-m55wk\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.330312 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-fernet-keys\") pod \"keystone-cron-29486221-m55wk\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.330366 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c9f6\" (UniqueName: \"kubernetes.io/projected/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-kube-api-access-4c9f6\") pod \"keystone-cron-29486221-m55wk\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.330499 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-config-data\") pod \"keystone-cron-29486221-m55wk\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.330695 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/086063e3-8b57-4be7-bc3d-cc2b033a86a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.364145 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.370686 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.390390 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.392680 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.396097 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.396253 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.400514 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.418812 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.441816 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.441861 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-combined-ca-bundle\") pod \"keystone-cron-29486221-m55wk\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.441888 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-config-data\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.441929 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.441980 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-fernet-keys\") pod \"keystone-cron-29486221-m55wk\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.442015 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c9f6\" (UniqueName: \"kubernetes.io/projected/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-kube-api-access-4c9f6\") pod \"keystone-cron-29486221-m55wk\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.442051 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.442067 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-9tk2l\" (UniqueName: \"kubernetes.io/projected/efecb0e3-1734-4098-ac3f-3b4b8957cc40-kube-api-access-9tk2l\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.442083 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efecb0e3-1734-4098-ac3f-3b4b8957cc40-log-httpd\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.442119 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efecb0e3-1734-4098-ac3f-3b4b8957cc40-run-httpd\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.442141 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-config-data\") pod \"keystone-cron-29486221-m55wk\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.442202 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-scripts\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.448274 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-fernet-keys\") pod \"keystone-cron-29486221-m55wk\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.448434 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-combined-ca-bundle\") pod \"keystone-cron-29486221-m55wk\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.452531 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-config-data\") pod \"keystone-cron-29486221-m55wk\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.466452 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c9f6\" (UniqueName: \"kubernetes.io/projected/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-kube-api-access-4c9f6\") pod \"keystone-cron-29486221-m55wk\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.543361 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.543908 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efecb0e3-1734-4098-ac3f-3b4b8957cc40-run-httpd\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.544013 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-scripts\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.544053 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.544094 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-config-data\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.544138 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.544215 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.544255 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tk2l\" (UniqueName: \"kubernetes.io/projected/efecb0e3-1734-4098-ac3f-3b4b8957cc40-kube-api-access-9tk2l\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.544274 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efecb0e3-1734-4098-ac3f-3b4b8957cc40-log-httpd\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.544361 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efecb0e3-1734-4098-ac3f-3b4b8957cc40-run-httpd\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.544653 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efecb0e3-1734-4098-ac3f-3b4b8957cc40-log-httpd\") pod \"ceilometer-0\" (UID: 
\"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.550533 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.551316 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.552087 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.552183 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-scripts\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.560563 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tk2l\" (UniqueName: \"kubernetes.io/projected/efecb0e3-1734-4098-ac3f-3b4b8957cc40-kube-api-access-9tk2l\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.564714 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efecb0e3-1734-4098-ac3f-3b4b8957cc40-config-data\") pod \"ceilometer-0\" (UID: \"efecb0e3-1734-4098-ac3f-3b4b8957cc40\") " pod="openstack/ceilometer-0" Jan 23 13:01:00 crc kubenswrapper[4865]: I0123 13:01:00.712318 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 23 13:01:01 crc kubenswrapper[4865]: I0123 13:01:01.645493 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29486221-m55wk"] Jan 23 13:01:01 crc kubenswrapper[4865]: I0123 13:01:01.905006 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 23 13:01:02 crc kubenswrapper[4865]: I0123 13:01:02.099537 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efecb0e3-1734-4098-ac3f-3b4b8957cc40","Type":"ContainerStarted","Data":"dab1bbee82b6e3dd5414272f0c889d050d7b2171cdd80a5f76c99e8eda0023ed"} Jan 23 13:01:02 crc kubenswrapper[4865]: I0123 13:01:02.100558 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486221-m55wk" event={"ID":"8a25ed15-7535-41e4-9f98-f37ac5e9c79f","Type":"ContainerStarted","Data":"235874b910493099f86e83c29c745f1b2db023d4c039ccb69439279ebef48fc3"} Jan 23 13:01:02 crc kubenswrapper[4865]: I0123 13:01:02.119205 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 13:01:02 crc kubenswrapper[4865]: E0123 13:01:02.119446 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:01:02 crc kubenswrapper[4865]: I0123 13:01:02.151828 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="086063e3-8b57-4be7-bc3d-cc2b033a86a3" path="/var/lib/kubelet/pods/086063e3-8b57-4be7-bc3d-cc2b033a86a3/volumes" Jan 23 13:01:03 crc kubenswrapper[4865]: I0123 13:01:03.115175 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486221-m55wk" event={"ID":"8a25ed15-7535-41e4-9f98-f37ac5e9c79f","Type":"ContainerStarted","Data":"d9db09655767ad9bc70f654fec0004e5356ba581b4b13ea861bd6d3089cf365d"} Jan 23 13:01:03 crc kubenswrapper[4865]: I0123 13:01:03.155362 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efecb0e3-1734-4098-ac3f-3b4b8957cc40","Type":"ContainerStarted","Data":"9c3ad7132d4095607125aaf933b17ba9c29b3cc001a1eac0c1c498155a6ce052"} Jan 23 13:01:03 crc kubenswrapper[4865]: I0123 13:01:03.155426 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efecb0e3-1734-4098-ac3f-3b4b8957cc40","Type":"ContainerStarted","Data":"3efa834ef1b23a6960759b97c714ec44c9f6f8c2d36cf7b4db361c2dc5a1acce"} Jan 23 13:01:04 crc kubenswrapper[4865]: I0123 13:01:04.170312 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efecb0e3-1734-4098-ac3f-3b4b8957cc40","Type":"ContainerStarted","Data":"8e47ffa5260397b3f14b48ac7b5bbc4f63b55b72bc070c8e9b8b9486665b061f"} Jan 23 13:01:04 crc kubenswrapper[4865]: I0123 13:01:04.214710 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4g249" Jan 23 13:01:04 crc kubenswrapper[4865]: I0123 13:01:04.245461 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29486221-m55wk" podStartSLOduration=4.245441123 
podStartE2EDuration="4.245441123s" podCreationTimestamp="2026-01-23 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 13:01:03.145617878 +0000 UTC m=+4107.314690104" watchObservedRunningTime="2026-01-23 13:01:04.245441123 +0000 UTC m=+4108.414513349" Jan 23 13:01:04 crc kubenswrapper[4865]: I0123 13:01:04.358447 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 13:01:04 crc kubenswrapper[4865]: I0123 13:01:04.358530 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 13:01:04 crc kubenswrapper[4865]: I0123 13:01:04.359357 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"3b159abcad530536fdd7d6889057fa206bca7748bc5c03ac17748413d7478e82"} pod="openstack/horizon-66f7b94cdb-f7pw2" containerMessage="Container horizon failed startup probe, will be restarted" Jan 23 13:01:04 crc kubenswrapper[4865]: I0123 13:01:04.359407 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" containerID="cri-o://3b159abcad530536fdd7d6889057fa206bca7748bc5c03ac17748413d7478e82" gracePeriod=30 Jan 23 13:01:08 crc kubenswrapper[4865]: I0123 13:01:08.240472 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efecb0e3-1734-4098-ac3f-3b4b8957cc40","Type":"ContainerStarted","Data":"a2418b5d59a8c04aa492ab42a04855c058c5ed5a22fd5115b61de3918475f96f"} Jan 23 13:01:08 crc kubenswrapper[4865]: I0123 13:01:08.241053 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 23 13:01:08 crc kubenswrapper[4865]: I0123 13:01:08.260848 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.445438775 podStartE2EDuration="8.260830277s" podCreationTimestamp="2026-01-23 13:01:00 +0000 UTC" firstStartedPulling="2026-01-23 13:01:01.937693778 +0000 UTC m=+4106.106766004" lastFinishedPulling="2026-01-23 13:01:06.75308528 +0000 UTC m=+4110.922157506" observedRunningTime="2026-01-23 13:01:08.258802799 +0000 UTC m=+4112.427875025" watchObservedRunningTime="2026-01-23 13:01:08.260830277 +0000 UTC m=+4112.429902503" Jan 23 13:01:09 crc kubenswrapper[4865]: I0123 13:01:09.250735 4865 generic.go:334] "Generic (PLEG): container finished" podID="8a25ed15-7535-41e4-9f98-f37ac5e9c79f" containerID="d9db09655767ad9bc70f654fec0004e5356ba581b4b13ea861bd6d3089cf365d" exitCode=0 Jan 23 13:01:09 crc kubenswrapper[4865]: I0123 13:01:09.250825 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486221-m55wk" event={"ID":"8a25ed15-7535-41e4-9f98-f37ac5e9c79f","Type":"ContainerDied","Data":"d9db09655767ad9bc70f654fec0004e5356ba581b4b13ea861bd6d3089cf365d"} Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.065210 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.170116 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-config-data\") pod \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.170215 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c9f6\" (UniqueName: \"kubernetes.io/projected/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-kube-api-access-4c9f6\") pod \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.170271 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-fernet-keys\") pod \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.170466 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-combined-ca-bundle\") pod \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.187401 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8a25ed15-7535-41e4-9f98-f37ac5e9c79f" (UID: "8a25ed15-7535-41e4-9f98-f37ac5e9c79f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.187413 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-kube-api-access-4c9f6" (OuterVolumeSpecName: "kube-api-access-4c9f6") pod "8a25ed15-7535-41e4-9f98-f37ac5e9c79f" (UID: "8a25ed15-7535-41e4-9f98-f37ac5e9c79f"). InnerVolumeSpecName "kube-api-access-4c9f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.224252 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a25ed15-7535-41e4-9f98-f37ac5e9c79f" (UID: "8a25ed15-7535-41e4-9f98-f37ac5e9c79f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.275154 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-config-data" (OuterVolumeSpecName: "config-data") pod "8a25ed15-7535-41e4-9f98-f37ac5e9c79f" (UID: "8a25ed15-7535-41e4-9f98-f37ac5e9c79f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.275867 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-config-data\") pod \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\" (UID: \"8a25ed15-7535-41e4-9f98-f37ac5e9c79f\") " Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.276058 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29486221-m55wk" event={"ID":"8a25ed15-7535-41e4-9f98-f37ac5e9c79f","Type":"ContainerDied","Data":"235874b910493099f86e83c29c745f1b2db023d4c039ccb69439279ebef48fc3"} Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.276163 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235874b910493099f86e83c29c745f1b2db023d4c039ccb69439279ebef48fc3" Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.276101 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29486221-m55wk" Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.276798 4865 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 13:01:11 crc kubenswrapper[4865]: W0123 13:01:11.277808 4865 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/8a25ed15-7535-41e4-9f98-f37ac5e9c79f/volumes/kubernetes.io~secret/config-data Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.277834 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-config-data" (OuterVolumeSpecName: "config-data") pod "8a25ed15-7535-41e4-9f98-f37ac5e9c79f" (UID: "8a25ed15-7535-41e4-9f98-f37ac5e9c79f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.278448 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c9f6\" (UniqueName: \"kubernetes.io/projected/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-kube-api-access-4c9f6\") on node \"crc\" DevicePath \"\"" Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.278480 4865 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 13:01:11 crc kubenswrapper[4865]: I0123 13:01:11.380250 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a25ed15-7535-41e4-9f98-f37ac5e9c79f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:01:14 crc kubenswrapper[4865]: I0123 13:01:14.118375 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 13:01:14 crc kubenswrapper[4865]: E0123 13:01:14.120094 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:01:26 crc kubenswrapper[4865]: I0123 13:01:26.139921 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 13:01:26 crc kubenswrapper[4865]: E0123 13:01:26.140764 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:01:30 crc kubenswrapper[4865]: I0123 13:01:30.952045 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 23 13:01:35 crc kubenswrapper[4865]: I0123 13:01:35.603709 4865 generic.go:334] "Generic (PLEG): container finished" podID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerID="3b159abcad530536fdd7d6889057fa206bca7748bc5c03ac17748413d7478e82" exitCode=137 Jan 23 13:01:35 crc kubenswrapper[4865]: I0123 13:01:35.604381 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerDied","Data":"3b159abcad530536fdd7d6889057fa206bca7748bc5c03ac17748413d7478e82"} Jan 23 13:01:35 crc kubenswrapper[4865]: I0123 13:01:35.604423 4865 scope.go:117] "RemoveContainer" containerID="30d3be517958f944291cf17e27e1c020327e5e7efe24145c62c6dbbaacd35043" Jan 23 13:01:36 crc kubenswrapper[4865]: I0123 13:01:36.615451 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66f7b94cdb-f7pw2" event={"ID":"98cc6a2c-601d-49ae-8d9c-da49869b3639","Type":"ContainerStarted","Data":"562ae10570a795049fa3049cd08560fe3656248e4b2c2e7928f869718638ecd1"} Jan 23 13:01:37 crc kubenswrapper[4865]: I0123 13:01:37.120340 4865 scope.go:117] "RemoveContainer" 
containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 13:01:37 crc kubenswrapper[4865]: E0123 13:01:37.120852 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:01:44 crc kubenswrapper[4865]: I0123 13:01:44.354790 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 13:01:44 crc kubenswrapper[4865]: I0123 13:01:44.355317 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 13:01:50 crc kubenswrapper[4865]: I0123 13:01:50.119050 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 13:01:50 crc kubenswrapper[4865]: E0123 13:01:50.120107 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:01:54 crc kubenswrapper[4865]: I0123 13:01:54.357025 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-66f7b94cdb-f7pw2" podUID="98cc6a2c-601d-49ae-8d9c-da49869b3639" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 23 13:02:03 crc kubenswrapper[4865]: I0123 13:02:03.118195 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 13:02:03 crc kubenswrapper[4865]: I0123 13:02:03.893063 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"568f85f9a877cd9bc7e372931afd084e77c203a199212974dad83d97bdb02d8e"} Jan 23 13:02:06 crc kubenswrapper[4865]: I0123 13:02:06.553788 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 13:02:08 crc kubenswrapper[4865]: I0123 13:02:08.257976 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-66f7b94cdb-f7pw2" Jan 23 13:04:18 crc kubenswrapper[4865]: I0123 13:04:18.776534 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:04:18 crc kubenswrapper[4865]: I0123 13:04:18.777387 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:04:48 crc kubenswrapper[4865]: I0123 13:04:48.776754 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:04:48 crc kubenswrapper[4865]: I0123 13:04:48.777354 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:05:18 crc kubenswrapper[4865]: I0123 13:05:18.776405 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:05:18 crc kubenswrapper[4865]: I0123 13:05:18.776996 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:05:18 crc kubenswrapper[4865]: I0123 13:05:18.777046 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 13:05:18 crc kubenswrapper[4865]: I0123 13:05:18.779410 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"568f85f9a877cd9bc7e372931afd084e77c203a199212974dad83d97bdb02d8e"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 13:05:18 crc kubenswrapper[4865]: I0123 13:05:18.779487 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://568f85f9a877cd9bc7e372931afd084e77c203a199212974dad83d97bdb02d8e" gracePeriod=600 Jan 23 13:05:19 crc kubenswrapper[4865]: E0123 13:05:19.141313 4865 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1884154e_d0e9_4dc1_81b8_dd8e7f9b5b3b.slice/crio-conmon-568f85f9a877cd9bc7e372931afd084e77c203a199212974dad83d97bdb02d8e.scope\": RecentStats: unable to find data in memory cache]" Jan 23 13:05:19 crc kubenswrapper[4865]: I0123 13:05:19.774711 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="568f85f9a877cd9bc7e372931afd084e77c203a199212974dad83d97bdb02d8e" exitCode=0 Jan 23 13:05:19 crc kubenswrapper[4865]: I0123 13:05:19.775276 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" 
event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"568f85f9a877cd9bc7e372931afd084e77c203a199212974dad83d97bdb02d8e"} Jan 23 13:05:19 crc kubenswrapper[4865]: I0123 13:05:19.775306 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerStarted","Data":"1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d"} Jan 23 13:05:19 crc kubenswrapper[4865]: I0123 13:05:19.775325 4865 scope.go:117] "RemoveContainer" containerID="e3bf9170ab26055735bee421f22e5dc65904029c59b4d8ff7887201d93ca7606" Jan 23 13:05:56 crc kubenswrapper[4865]: I0123 13:05:56.985819 4865 scope.go:117] "RemoveContainer" containerID="209a105b4626c03551b1a98e527566480a963aaf424e3d3ab9af992f67a30ece" Jan 23 13:05:57 crc kubenswrapper[4865]: I0123 13:05:57.015713 4865 scope.go:117] "RemoveContainer" containerID="9c9f53fb5b751f1033af191143537af0ff8bd25bc8516d1312401fa61f5f4d17" Jan 23 13:05:57 crc kubenswrapper[4865]: I0123 13:05:57.035463 4865 scope.go:117] "RemoveContainer" containerID="2e611903b6440fec75ec8cd4cc97040cb502aad45f91320971e5e02966249d6d" Jan 23 13:07:10 crc kubenswrapper[4865]: I0123 13:07:10.931880 4865 generic.go:334] "Generic (PLEG): container finished" podID="62cb5904-6543-42ea-8a83-ba0681efa497" containerID="4818e5edbc7d9dde4943aa4dab788cb6fb3f331fd6138efa3cd900edf294a61e" exitCode=1 Jan 23 13:07:10 crc kubenswrapper[4865]: I0123 13:07:10.931979 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"62cb5904-6543-42ea-8a83-ba0681efa497","Type":"ContainerDied","Data":"4818e5edbc7d9dde4943aa4dab788cb6fb3f331fd6138efa3cd900edf294a61e"} Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.546647 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.717574 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-ca-certs\") pod \"62cb5904-6543-42ea-8a83-ba0681efa497\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.717876 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgfhs\" (UniqueName: \"kubernetes.io/projected/62cb5904-6543-42ea-8a83-ba0681efa497-kube-api-access-wgfhs\") pod \"62cb5904-6543-42ea-8a83-ba0681efa497\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.717913 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/62cb5904-6543-42ea-8a83-ba0681efa497-openstack-config\") pod \"62cb5904-6543-42ea-8a83-ba0681efa497\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.717935 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-ssh-key\") pod \"62cb5904-6543-42ea-8a83-ba0681efa497\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.717985 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/62cb5904-6543-42ea-8a83-ba0681efa497-test-operator-ephemeral-temporary\") pod \"62cb5904-6543-42ea-8a83-ba0681efa497\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.718031 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"62cb5904-6543-42ea-8a83-ba0681efa497\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.718152 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/62cb5904-6543-42ea-8a83-ba0681efa497-test-operator-ephemeral-workdir\") pod \"62cb5904-6543-42ea-8a83-ba0681efa497\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.718226 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-openstack-config-secret\") pod \"62cb5904-6543-42ea-8a83-ba0681efa497\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.718275 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cb5904-6543-42ea-8a83-ba0681efa497-config-data\") pod \"62cb5904-6543-42ea-8a83-ba0681efa497\" (UID: \"62cb5904-6543-42ea-8a83-ba0681efa497\") " Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.719702 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62cb5904-6543-42ea-8a83-ba0681efa497-test-operator-ephemeral-temporary" 
(OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "62cb5904-6543-42ea-8a83-ba0681efa497" (UID: "62cb5904-6543-42ea-8a83-ba0681efa497"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.720258 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62cb5904-6543-42ea-8a83-ba0681efa497-config-data" (OuterVolumeSpecName: "config-data") pod "62cb5904-6543-42ea-8a83-ba0681efa497" (UID: "62cb5904-6543-42ea-8a83-ba0681efa497"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.724392 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "62cb5904-6543-42ea-8a83-ba0681efa497" (UID: "62cb5904-6543-42ea-8a83-ba0681efa497"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.724623 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62cb5904-6543-42ea-8a83-ba0681efa497-kube-api-access-wgfhs" (OuterVolumeSpecName: "kube-api-access-wgfhs") pod "62cb5904-6543-42ea-8a83-ba0681efa497" (UID: "62cb5904-6543-42ea-8a83-ba0681efa497"). InnerVolumeSpecName "kube-api-access-wgfhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.751304 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "62cb5904-6543-42ea-8a83-ba0681efa497" (UID: "62cb5904-6543-42ea-8a83-ba0681efa497"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.758272 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "62cb5904-6543-42ea-8a83-ba0681efa497" (UID: "62cb5904-6543-42ea-8a83-ba0681efa497"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.760691 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "62cb5904-6543-42ea-8a83-ba0681efa497" (UID: "62cb5904-6543-42ea-8a83-ba0681efa497"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.781784 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62cb5904-6543-42ea-8a83-ba0681efa497-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "62cb5904-6543-42ea-8a83-ba0681efa497" (UID: "62cb5904-6543-42ea-8a83-ba0681efa497"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.790192 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62cb5904-6543-42ea-8a83-ba0681efa497-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "62cb5904-6543-42ea-8a83-ba0681efa497" (UID: "62cb5904-6543-42ea-8a83-ba0681efa497"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.820723 4865 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.821847 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgfhs\" (UniqueName: \"kubernetes.io/projected/62cb5904-6543-42ea-8a83-ba0681efa497-kube-api-access-wgfhs\") on node \"crc\" DevicePath \"\"" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.821886 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/62cb5904-6543-42ea-8a83-ba0681efa497-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.821896 4865 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.821906 4865 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/62cb5904-6543-42ea-8a83-ba0681efa497-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.821944 4865 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.821954 4865 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/62cb5904-6543-42ea-8a83-ba0681efa497-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.821966 4865 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/62cb5904-6543-42ea-8a83-ba0681efa497-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.821993 4865 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/62cb5904-6543-42ea-8a83-ba0681efa497-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.843630 4865 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.924445 4865 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.954042 4865 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"62cb5904-6543-42ea-8a83-ba0681efa497","Type":"ContainerDied","Data":"dc7189744acafb4f684614bef2c05b2cd2646b6460fe5b8077521aa37d937f77"} Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.954090 4865 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc7189744acafb4f684614bef2c05b2cd2646b6460fe5b8077521aa37d937f77" Jan 23 13:07:12 crc kubenswrapper[4865]: I0123 13:07:12.954157 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.282430 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 13:07:15 crc kubenswrapper[4865]: E0123 13:07:15.283371 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a25ed15-7535-41e4-9f98-f37ac5e9c79f" containerName="keystone-cron" Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.283389 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a25ed15-7535-41e4-9f98-f37ac5e9c79f" containerName="keystone-cron" Jan 23 13:07:15 crc kubenswrapper[4865]: E0123 13:07:15.283422 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cb5904-6543-42ea-8a83-ba0681efa497" containerName="tempest-tests-tempest-tests-runner" Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.283430 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cb5904-6543-42ea-8a83-ba0681efa497" containerName="tempest-tests-tempest-tests-runner" Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.283642 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a25ed15-7535-41e4-9f98-f37ac5e9c79f" containerName="keystone-cron" Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.283690 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="62cb5904-6543-42ea-8a83-ba0681efa497" containerName="tempest-tests-tempest-tests-runner" Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.284408 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.287173 4865 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rsw8g" Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.298350 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.469157 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjktp\" (UniqueName: \"kubernetes.io/projected/59f7a53a-4cf9-42de-b7b6-feaf3c92ad0c-kube-api-access-xjktp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"59f7a53a-4cf9-42de-b7b6-feaf3c92ad0c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.469231 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"59f7a53a-4cf9-42de-b7b6-feaf3c92ad0c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.571475 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjktp\" (UniqueName: \"kubernetes.io/projected/59f7a53a-4cf9-42de-b7b6-feaf3c92ad0c-kube-api-access-xjktp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"59f7a53a-4cf9-42de-b7b6-feaf3c92ad0c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.571526 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"59f7a53a-4cf9-42de-b7b6-feaf3c92ad0c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 13:07:15 crc kubenswrapper[4865]: I0123 13:07:15.572617 4865 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"59f7a53a-4cf9-42de-b7b6-feaf3c92ad0c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 13:07:16 crc kubenswrapper[4865]: I0123 13:07:16.068735 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjktp\" (UniqueName: \"kubernetes.io/projected/59f7a53a-4cf9-42de-b7b6-feaf3c92ad0c-kube-api-access-xjktp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"59f7a53a-4cf9-42de-b7b6-feaf3c92ad0c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 13:07:16 crc kubenswrapper[4865]: I0123 13:07:16.232498 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"59f7a53a-4cf9-42de-b7b6-feaf3c92ad0c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 13:07:16 crc 
kubenswrapper[4865]: I0123 13:07:16.505107 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 23 13:07:16 crc kubenswrapper[4865]: I0123 13:07:16.978222 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 23 13:07:16 crc kubenswrapper[4865]: I0123 13:07:16.991204 4865 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 13:07:16 crc kubenswrapper[4865]: I0123 13:07:16.992491 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"59f7a53a-4cf9-42de-b7b6-feaf3c92ad0c","Type":"ContainerStarted","Data":"61e8090c16700c9d87d577b8510193b9eba4003d65b05df6f36d1d1b47b9c32d"} Jan 23 13:07:20 crc kubenswrapper[4865]: I0123 13:07:20.018357 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"59f7a53a-4cf9-42de-b7b6-feaf3c92ad0c","Type":"ContainerStarted","Data":"3e9d4db50e94918795615827fa598ca33ebda2298d6a1aef7b449e95277d0102"} Jan 23 13:07:20 crc kubenswrapper[4865]: I0123 13:07:20.038843 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.913777004 podStartE2EDuration="5.038821169s" podCreationTimestamp="2026-01-23 13:07:15 +0000 UTC" firstStartedPulling="2026-01-23 13:07:16.989320773 +0000 UTC m=+4481.158392999" lastFinishedPulling="2026-01-23 13:07:19.114364928 +0000 UTC m=+4483.283437164" observedRunningTime="2026-01-23 13:07:20.033354247 +0000 UTC m=+4484.202426473" watchObservedRunningTime="2026-01-23 13:07:20.038821169 +0000 UTC m=+4484.207893405" Jan 23 13:07:48 crc kubenswrapper[4865]: I0123 13:07:48.776142 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:07:48 crc kubenswrapper[4865]: I0123 13:07:48.776724 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:08:18 crc kubenswrapper[4865]: I0123 13:08:18.776271 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:08:18 crc kubenswrapper[4865]: I0123 13:08:18.776849 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:08:48 crc kubenswrapper[4865]: I0123 13:08:48.776328 4865 patch_prober.go:28] interesting pod/machine-config-daemon-sgp5m container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 13:08:48 crc kubenswrapper[4865]: I0123 13:08:48.776894 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 13:08:48 crc kubenswrapper[4865]: I0123 13:08:48.776968 4865 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" Jan 23 13:08:48 crc kubenswrapper[4865]: I0123 13:08:48.777802 4865 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d"} pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 13:08:48 crc kubenswrapper[4865]: I0123 13:08:48.777855 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerName="machine-config-daemon" containerID="cri-o://1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d" gracePeriod=600 Jan 23 13:08:49 crc kubenswrapper[4865]: E0123 13:08:49.401557 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:08:49 crc kubenswrapper[4865]: I0123 13:08:49.857869 4865 generic.go:334] "Generic (PLEG): container finished" podID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" containerID="1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d" exitCode=0 Jan 23 13:08:49 crc kubenswrapper[4865]: I0123 13:08:49.857909 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" event={"ID":"1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b","Type":"ContainerDied","Data":"1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d"} Jan 23 13:08:49 crc kubenswrapper[4865]: I0123 13:08:49.857988 4865 scope.go:117] "RemoveContainer" containerID="568f85f9a877cd9bc7e372931afd084e77c203a199212974dad83d97bdb02d8e" Jan 23 13:08:49 crc kubenswrapper[4865]: I0123 13:08:49.859216 4865 scope.go:117] "RemoveContainer" containerID="1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d" Jan 23 13:08:49 crc kubenswrapper[4865]: E0123 13:08:49.859523 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" 
Jan 23 13:09:01 crc kubenswrapper[4865]: I0123 13:09:01.118402 4865 scope.go:117] "RemoveContainer" containerID="1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d" Jan 23 13:09:01 crc kubenswrapper[4865]: E0123 13:09:01.119222 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.237383 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7t5dj"] Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.240434 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.259799 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7t5dj"] Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.375911 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903ebfa4-f8a8-498b-aceb-8b5aa292da18-utilities\") pod \"redhat-operators-7t5dj\" (UID: \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\") " pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.376014 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903ebfa4-f8a8-498b-aceb-8b5aa292da18-catalog-content\") pod \"redhat-operators-7t5dj\" (UID: \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\") " pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.376141 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8vgj\" (UniqueName: \"kubernetes.io/projected/903ebfa4-f8a8-498b-aceb-8b5aa292da18-kube-api-access-x8vgj\") pod \"redhat-operators-7t5dj\" (UID: \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\") " pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.477918 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903ebfa4-f8a8-498b-aceb-8b5aa292da18-catalog-content\") pod \"redhat-operators-7t5dj\" (UID: \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\") " pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.478008 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8vgj\" (UniqueName: \"kubernetes.io/projected/903ebfa4-f8a8-498b-aceb-8b5aa292da18-kube-api-access-x8vgj\") pod \"redhat-operators-7t5dj\" (UID: \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\") " pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.478096 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903ebfa4-f8a8-498b-aceb-8b5aa292da18-utilities\") pod \"redhat-operators-7t5dj\" (UID: \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\") " 
pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.478552 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903ebfa4-f8a8-498b-aceb-8b5aa292da18-utilities\") pod \"redhat-operators-7t5dj\" (UID: \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\") " pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.478679 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903ebfa4-f8a8-498b-aceb-8b5aa292da18-catalog-content\") pod \"redhat-operators-7t5dj\" (UID: \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\") " pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.501535 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8vgj\" (UniqueName: \"kubernetes.io/projected/903ebfa4-f8a8-498b-aceb-8b5aa292da18-kube-api-access-x8vgj\") pod \"redhat-operators-7t5dj\" (UID: \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\") " pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:09 crc kubenswrapper[4865]: I0123 13:09:09.560910 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:10 crc kubenswrapper[4865]: I0123 13:09:10.032051 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7t5dj"] Jan 23 13:09:10 crc kubenswrapper[4865]: I0123 13:09:10.056038 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t5dj" event={"ID":"903ebfa4-f8a8-498b-aceb-8b5aa292da18","Type":"ContainerStarted","Data":"968c5f5ef72755de559e1b881948e8c74735cf590b0ffc690fec2dfda2c7203c"} Jan 23 13:09:11 crc kubenswrapper[4865]: I0123 13:09:11.067263 4865 generic.go:334] "Generic (PLEG): container finished" podID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" containerID="f3fe99f5c807b47b14c2721029e28dd9892566b090145b8259529470347c23aa" exitCode=0 Jan 23 13:09:11 crc kubenswrapper[4865]: I0123 13:09:11.067297 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t5dj" event={"ID":"903ebfa4-f8a8-498b-aceb-8b5aa292da18","Type":"ContainerDied","Data":"f3fe99f5c807b47b14c2721029e28dd9892566b090145b8259529470347c23aa"} Jan 23 13:09:13 crc kubenswrapper[4865]: I0123 13:09:13.096255 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t5dj" event={"ID":"903ebfa4-f8a8-498b-aceb-8b5aa292da18","Type":"ContainerStarted","Data":"caaecd517a8c14e7ba5d1cd7cc51c86ccd966556845ec4dd4b33a809a54885f7"} Jan 23 13:09:15 crc kubenswrapper[4865]: I0123 13:09:15.118693 4865 scope.go:117] "RemoveContainer" containerID="1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d" Jan 23 13:09:15 crc kubenswrapper[4865]: E0123 13:09:15.119390 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:09:19 crc kubenswrapper[4865]: I0123 13:09:19.152419 4865 generic.go:334] "Generic (PLEG): 
container finished" podID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" containerID="caaecd517a8c14e7ba5d1cd7cc51c86ccd966556845ec4dd4b33a809a54885f7" exitCode=0 Jan 23 13:09:19 crc kubenswrapper[4865]: I0123 13:09:19.152505 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t5dj" event={"ID":"903ebfa4-f8a8-498b-aceb-8b5aa292da18","Type":"ContainerDied","Data":"caaecd517a8c14e7ba5d1cd7cc51c86ccd966556845ec4dd4b33a809a54885f7"} Jan 23 13:09:20 crc kubenswrapper[4865]: I0123 13:09:20.176371 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t5dj" event={"ID":"903ebfa4-f8a8-498b-aceb-8b5aa292da18","Type":"ContainerStarted","Data":"8232bb9e55edeb9dfb6e00a696d0c8bc34b9381af3156afa24f66bd5f5186dd1"} Jan 23 13:09:20 crc kubenswrapper[4865]: I0123 13:09:20.205987 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7t5dj" podStartSLOduration=2.727687738 podStartE2EDuration="11.205973218s" podCreationTimestamp="2026-01-23 13:09:09 +0000 UTC" firstStartedPulling="2026-01-23 13:09:11.06906442 +0000 UTC m=+4595.238136646" lastFinishedPulling="2026-01-23 13:09:19.5473499 +0000 UTC m=+4603.716422126" observedRunningTime="2026-01-23 13:09:20.20233163 +0000 UTC m=+4604.371403846" watchObservedRunningTime="2026-01-23 13:09:20.205973218 +0000 UTC m=+4604.375045444" Jan 23 13:09:26 crc kubenswrapper[4865]: I0123 13:09:26.125553 4865 scope.go:117] "RemoveContainer" containerID="1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d" Jan 23 13:09:26 crc kubenswrapper[4865]: E0123 13:09:26.126410 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:09:29 crc kubenswrapper[4865]: I0123 13:09:29.561857 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:29 crc kubenswrapper[4865]: I0123 13:09:29.563039 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:30 crc kubenswrapper[4865]: I0123 13:09:30.616719 4865 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7t5dj" podUID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" containerName="registry-server" probeResult="failure" output=< Jan 23 13:09:30 crc kubenswrapper[4865]: timeout: failed to connect service ":50051" within 1s Jan 23 13:09:30 crc kubenswrapper[4865]: > Jan 23 13:09:39 crc kubenswrapper[4865]: I0123 13:09:39.654021 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:39 crc kubenswrapper[4865]: I0123 13:09:39.720486 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:40 crc kubenswrapper[4865]: I0123 13:09:40.117705 4865 scope.go:117] "RemoveContainer" containerID="1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d" Jan 23 13:09:40 crc kubenswrapper[4865]: E0123 13:09:40.117911 4865 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:09:40 crc kubenswrapper[4865]: I0123 13:09:40.446031 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7t5dj"] Jan 23 13:09:41 crc kubenswrapper[4865]: I0123 13:09:41.394360 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7t5dj" podUID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" containerName="registry-server" containerID="cri-o://8232bb9e55edeb9dfb6e00a696d0c8bc34b9381af3156afa24f66bd5f5186dd1" gracePeriod=2 Jan 23 13:09:42 crc kubenswrapper[4865]: I0123 13:09:42.414399 4865 generic.go:334] "Generic (PLEG): container finished" podID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" containerID="8232bb9e55edeb9dfb6e00a696d0c8bc34b9381af3156afa24f66bd5f5186dd1" exitCode=0 Jan 23 13:09:42 crc kubenswrapper[4865]: I0123 13:09:42.414503 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t5dj" event={"ID":"903ebfa4-f8a8-498b-aceb-8b5aa292da18","Type":"ContainerDied","Data":"8232bb9e55edeb9dfb6e00a696d0c8bc34b9381af3156afa24f66bd5f5186dd1"} Jan 23 13:09:42 crc kubenswrapper[4865]: I0123 13:09:42.578096 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:42 crc kubenswrapper[4865]: I0123 13:09:42.705531 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8vgj\" (UniqueName: \"kubernetes.io/projected/903ebfa4-f8a8-498b-aceb-8b5aa292da18-kube-api-access-x8vgj\") pod \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\" (UID: \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\") " Jan 23 13:09:42 crc kubenswrapper[4865]: I0123 13:09:42.705638 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903ebfa4-f8a8-498b-aceb-8b5aa292da18-utilities\") pod \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\" (UID: \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\") " Jan 23 13:09:42 crc kubenswrapper[4865]: I0123 13:09:42.705737 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903ebfa4-f8a8-498b-aceb-8b5aa292da18-catalog-content\") pod \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\" (UID: \"903ebfa4-f8a8-498b-aceb-8b5aa292da18\") " Jan 23 13:09:42 crc kubenswrapper[4865]: I0123 13:09:42.706479 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/903ebfa4-f8a8-498b-aceb-8b5aa292da18-utilities" (OuterVolumeSpecName: "utilities") pod "903ebfa4-f8a8-498b-aceb-8b5aa292da18" (UID: "903ebfa4-f8a8-498b-aceb-8b5aa292da18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:09:42 crc kubenswrapper[4865]: I0123 13:09:42.717282 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/903ebfa4-f8a8-498b-aceb-8b5aa292da18-kube-api-access-x8vgj" (OuterVolumeSpecName: "kube-api-access-x8vgj") pod "903ebfa4-f8a8-498b-aceb-8b5aa292da18" (UID: "903ebfa4-f8a8-498b-aceb-8b5aa292da18"). 
InnerVolumeSpecName "kube-api-access-x8vgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:09:42 crc kubenswrapper[4865]: I0123 13:09:42.808406 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8vgj\" (UniqueName: \"kubernetes.io/projected/903ebfa4-f8a8-498b-aceb-8b5aa292da18-kube-api-access-x8vgj\") on node \"crc\" DevicePath \"\"" Jan 23 13:09:42 crc kubenswrapper[4865]: I0123 13:09:42.808449 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903ebfa4-f8a8-498b-aceb-8b5aa292da18-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:09:42 crc kubenswrapper[4865]: I0123 13:09:42.852383 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/903ebfa4-f8a8-498b-aceb-8b5aa292da18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "903ebfa4-f8a8-498b-aceb-8b5aa292da18" (UID: "903ebfa4-f8a8-498b-aceb-8b5aa292da18"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:09:42 crc kubenswrapper[4865]: I0123 13:09:42.910532 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903ebfa4-f8a8-498b-aceb-8b5aa292da18-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:09:43 crc kubenswrapper[4865]: I0123 13:09:43.428062 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7t5dj" event={"ID":"903ebfa4-f8a8-498b-aceb-8b5aa292da18","Type":"ContainerDied","Data":"968c5f5ef72755de559e1b881948e8c74735cf590b0ffc690fec2dfda2c7203c"} Jan 23 13:09:43 crc kubenswrapper[4865]: I0123 13:09:43.428373 4865 scope.go:117] "RemoveContainer" containerID="8232bb9e55edeb9dfb6e00a696d0c8bc34b9381af3156afa24f66bd5f5186dd1" Jan 23 13:09:43 crc kubenswrapper[4865]: I0123 13:09:43.428312 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7t5dj" Jan 23 13:09:43 crc kubenswrapper[4865]: I0123 13:09:43.462377 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7t5dj"] Jan 23 13:09:43 crc kubenswrapper[4865]: I0123 13:09:43.467090 4865 scope.go:117] "RemoveContainer" containerID="caaecd517a8c14e7ba5d1cd7cc51c86ccd966556845ec4dd4b33a809a54885f7" Jan 23 13:09:43 crc kubenswrapper[4865]: I0123 13:09:43.474179 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7t5dj"] Jan 23 13:09:43 crc kubenswrapper[4865]: I0123 13:09:43.487476 4865 scope.go:117] "RemoveContainer" containerID="f3fe99f5c807b47b14c2721029e28dd9892566b090145b8259529470347c23aa" Jan 23 13:09:44 crc kubenswrapper[4865]: I0123 13:09:44.130397 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" path="/var/lib/kubelet/pods/903ebfa4-f8a8-498b-aceb-8b5aa292da18/volumes" Jan 23 13:09:45 crc kubenswrapper[4865]: I0123 13:09:45.860925 4865 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gn7d2"] Jan 23 13:09:45 crc kubenswrapper[4865]: E0123 13:09:45.861396 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" containerName="extract-content" Jan 23 13:09:45 crc kubenswrapper[4865]: I0123 13:09:45.861410 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" containerName="extract-content" Jan 23 13:09:45 crc kubenswrapper[4865]: E0123 13:09:45.861436 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" containerName="registry-server" Jan 23 13:09:45 crc kubenswrapper[4865]: I0123 13:09:45.861444 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" containerName="registry-server" Jan 23 13:09:45 crc kubenswrapper[4865]: E0123 13:09:45.861461 4865 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" containerName="extract-utilities" Jan 23 13:09:45 crc kubenswrapper[4865]: I0123 13:09:45.861469 4865 state_mem.go:107] "Deleted CPUSet assignment" podUID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" containerName="extract-utilities" Jan 23 13:09:45 crc kubenswrapper[4865]: I0123 13:09:45.861707 4865 memory_manager.go:354] "RemoveStaleState removing state" podUID="903ebfa4-f8a8-498b-aceb-8b5aa292da18" containerName="registry-server" Jan 23 13:09:45 crc kubenswrapper[4865]: I0123 13:09:45.863380 4865 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:45 crc kubenswrapper[4865]: I0123 13:09:45.877653 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn7d2"] Jan 23 13:09:45 crc kubenswrapper[4865]: I0123 13:09:45.963904 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ca34bd-2861-4503-89b5-df31ef0462ea-catalog-content\") pod \"redhat-marketplace-gn7d2\" (UID: \"48ca34bd-2861-4503-89b5-df31ef0462ea\") " pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:45 crc kubenswrapper[4865]: I0123 13:09:45.963989 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swmzs\" (UniqueName: \"kubernetes.io/projected/48ca34bd-2861-4503-89b5-df31ef0462ea-kube-api-access-swmzs\") pod \"redhat-marketplace-gn7d2\" (UID: \"48ca34bd-2861-4503-89b5-df31ef0462ea\") " pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:45 crc kubenswrapper[4865]: I0123 13:09:45.964396 4865 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ca34bd-2861-4503-89b5-df31ef0462ea-utilities\") pod \"redhat-marketplace-gn7d2\" (UID: \"48ca34bd-2861-4503-89b5-df31ef0462ea\") " pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:46 crc kubenswrapper[4865]: I0123 13:09:46.066467 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ca34bd-2861-4503-89b5-df31ef0462ea-catalog-content\") pod \"redhat-marketplace-gn7d2\" (UID: \"48ca34bd-2861-4503-89b5-df31ef0462ea\") " pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:46 crc kubenswrapper[4865]: I0123 13:09:46.066520 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swmzs\" (UniqueName: \"kubernetes.io/projected/48ca34bd-2861-4503-89b5-df31ef0462ea-kube-api-access-swmzs\") pod \"redhat-marketplace-gn7d2\" (UID: \"48ca34bd-2861-4503-89b5-df31ef0462ea\") " pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:46 crc kubenswrapper[4865]: I0123 13:09:46.066638 4865 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ca34bd-2861-4503-89b5-df31ef0462ea-utilities\") pod \"redhat-marketplace-gn7d2\" (UID: \"48ca34bd-2861-4503-89b5-df31ef0462ea\") " pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:46 crc kubenswrapper[4865]: I0123 13:09:46.067049 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ca34bd-2861-4503-89b5-df31ef0462ea-utilities\") pod \"redhat-marketplace-gn7d2\" (UID: \"48ca34bd-2861-4503-89b5-df31ef0462ea\") " pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:46 crc kubenswrapper[4865]: I0123 13:09:46.067261 4865 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ca34bd-2861-4503-89b5-df31ef0462ea-catalog-content\") pod \"redhat-marketplace-gn7d2\" (UID: \"48ca34bd-2861-4503-89b5-df31ef0462ea\") " pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:46 crc kubenswrapper[4865]: I0123 13:09:46.087484 4865 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-swmzs\" (UniqueName: \"kubernetes.io/projected/48ca34bd-2861-4503-89b5-df31ef0462ea-kube-api-access-swmzs\") pod \"redhat-marketplace-gn7d2\" (UID: \"48ca34bd-2861-4503-89b5-df31ef0462ea\") " pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:46 crc kubenswrapper[4865]: I0123 13:09:46.188109 4865 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:46 crc kubenswrapper[4865]: I0123 13:09:46.720830 4865 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn7d2"] Jan 23 13:09:47 crc kubenswrapper[4865]: I0123 13:09:47.472674 4865 generic.go:334] "Generic (PLEG): container finished" podID="48ca34bd-2861-4503-89b5-df31ef0462ea" containerID="1144d7b0ff40843d5799bcbf19d9b546cf9be13f3f4805200793691558358df1" exitCode=0 Jan 23 13:09:47 crc kubenswrapper[4865]: I0123 13:09:47.472930 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn7d2" event={"ID":"48ca34bd-2861-4503-89b5-df31ef0462ea","Type":"ContainerDied","Data":"1144d7b0ff40843d5799bcbf19d9b546cf9be13f3f4805200793691558358df1"} Jan 23 13:09:47 crc kubenswrapper[4865]: I0123 13:09:47.472953 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn7d2" event={"ID":"48ca34bd-2861-4503-89b5-df31ef0462ea","Type":"ContainerStarted","Data":"923caceea31960ba4babddc4549e18132b599128a0b1aaab278d2f7a8766eb9b"} Jan 23 13:09:49 crc kubenswrapper[4865]: I0123 13:09:49.506131 4865 generic.go:334] "Generic (PLEG): container finished" podID="48ca34bd-2861-4503-89b5-df31ef0462ea" containerID="2bc59c72039ef5c19a7ad5c590d32fa98ae812964e18df3649b6e04171f39405" exitCode=0 Jan 23 13:09:49 crc kubenswrapper[4865]: I0123 13:09:49.506234 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn7d2" event={"ID":"48ca34bd-2861-4503-89b5-df31ef0462ea","Type":"ContainerDied","Data":"2bc59c72039ef5c19a7ad5c590d32fa98ae812964e18df3649b6e04171f39405"} Jan 23 13:09:50 crc kubenswrapper[4865]: I0123 13:09:50.532955 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn7d2" event={"ID":"48ca34bd-2861-4503-89b5-df31ef0462ea","Type":"ContainerStarted","Data":"6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c"} Jan 23 13:09:50 crc kubenswrapper[4865]: I0123 13:09:50.563318 4865 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gn7d2" podStartSLOduration=3.136163604 podStartE2EDuration="5.563294538s" podCreationTimestamp="2026-01-23 13:09:45 +0000 UTC" firstStartedPulling="2026-01-23 13:09:47.475715621 +0000 UTC m=+4631.644787847" lastFinishedPulling="2026-01-23 13:09:49.902846565 +0000 UTC m=+4634.071918781" observedRunningTime="2026-01-23 13:09:50.550620552 +0000 UTC m=+4634.719692798" watchObservedRunningTime="2026-01-23 13:09:50.563294538 +0000 UTC m=+4634.732366764" Jan 23 13:09:55 crc kubenswrapper[4865]: I0123 13:09:55.118654 4865 scope.go:117] "RemoveContainer" containerID="1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d" Jan 23 13:09:55 crc kubenswrapper[4865]: E0123 13:09:55.119530 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:09:56 crc kubenswrapper[4865]: I0123 13:09:56.188776 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:56 crc kubenswrapper[4865]: I0123 13:09:56.189140 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:56 crc kubenswrapper[4865]: I0123 13:09:56.249888 4865 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:56 crc kubenswrapper[4865]: I0123 13:09:56.645616 4865 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:56 crc kubenswrapper[4865]: I0123 13:09:56.696256 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn7d2"] Jan 23 13:09:58 crc kubenswrapper[4865]: I0123 13:09:58.613256 4865 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gn7d2" podUID="48ca34bd-2861-4503-89b5-df31ef0462ea" containerName="registry-server" containerID="cri-o://6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c" gracePeriod=2 Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.129152 4865 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.243514 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ca34bd-2861-4503-89b5-df31ef0462ea-catalog-content\") pod \"48ca34bd-2861-4503-89b5-df31ef0462ea\" (UID: \"48ca34bd-2861-4503-89b5-df31ef0462ea\") " Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.243727 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ca34bd-2861-4503-89b5-df31ef0462ea-utilities\") pod \"48ca34bd-2861-4503-89b5-df31ef0462ea\" (UID: \"48ca34bd-2861-4503-89b5-df31ef0462ea\") " Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.243916 4865 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swmzs\" (UniqueName: \"kubernetes.io/projected/48ca34bd-2861-4503-89b5-df31ef0462ea-kube-api-access-swmzs\") pod \"48ca34bd-2861-4503-89b5-df31ef0462ea\" (UID: \"48ca34bd-2861-4503-89b5-df31ef0462ea\") " Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.244753 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48ca34bd-2861-4503-89b5-df31ef0462ea-utilities" (OuterVolumeSpecName: "utilities") pod "48ca34bd-2861-4503-89b5-df31ef0462ea" (UID: "48ca34bd-2861-4503-89b5-df31ef0462ea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.250246 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ca34bd-2861-4503-89b5-df31ef0462ea-kube-api-access-swmzs" (OuterVolumeSpecName: "kube-api-access-swmzs") pod "48ca34bd-2861-4503-89b5-df31ef0462ea" (UID: "48ca34bd-2861-4503-89b5-df31ef0462ea"). InnerVolumeSpecName "kube-api-access-swmzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.297743 4865 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48ca34bd-2861-4503-89b5-df31ef0462ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48ca34bd-2861-4503-89b5-df31ef0462ea" (UID: "48ca34bd-2861-4503-89b5-df31ef0462ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.346825 4865 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48ca34bd-2861-4503-89b5-df31ef0462ea-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.347067 4865 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48ca34bd-2861-4503-89b5-df31ef0462ea-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.347181 4865 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swmzs\" (UniqueName: \"kubernetes.io/projected/48ca34bd-2861-4503-89b5-df31ef0462ea-kube-api-access-swmzs\") on node \"crc\" DevicePath \"\"" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.626020 4865 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gn7d2" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.626025 4865 generic.go:334] "Generic (PLEG): container finished" podID="48ca34bd-2861-4503-89b5-df31ef0462ea" containerID="6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c" exitCode=0 Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.626051 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn7d2" event={"ID":"48ca34bd-2861-4503-89b5-df31ef0462ea","Type":"ContainerDied","Data":"6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c"} Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.630115 4865 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gn7d2" event={"ID":"48ca34bd-2861-4503-89b5-df31ef0462ea","Type":"ContainerDied","Data":"923caceea31960ba4babddc4549e18132b599128a0b1aaab278d2f7a8766eb9b"} Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.630160 4865 scope.go:117] "RemoveContainer" containerID="6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.675504 4865 scope.go:117] "RemoveContainer" containerID="2bc59c72039ef5c19a7ad5c590d32fa98ae812964e18df3649b6e04171f39405" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.676174 4865 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn7d2"] Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.687527 4865 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gn7d2"] Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.700490 4865 scope.go:117] "RemoveContainer" containerID="1144d7b0ff40843d5799bcbf19d9b546cf9be13f3f4805200793691558358df1" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.765937 4865 scope.go:117] "RemoveContainer" containerID="6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c" Jan 23 13:09:59 crc kubenswrapper[4865]: E0123 13:09:59.785289 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c\": container with ID starting with 6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c not found: ID does not exist" containerID="6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.785350 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c"} err="failed to get container status \"6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c\": rpc error: code = NotFound desc = could not find container \"6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c\": container with ID starting with 6421f32e90ba78433a217c591e749665ea6e8234cdd416b5c1be0613b128654c not found: ID does not exist" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.785384 4865 scope.go:117] "RemoveContainer" containerID="2bc59c72039ef5c19a7ad5c590d32fa98ae812964e18df3649b6e04171f39405" Jan 23 13:09:59 crc kubenswrapper[4865]: E0123 13:09:59.786179 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bc59c72039ef5c19a7ad5c590d32fa98ae812964e18df3649b6e04171f39405\": container with ID 
starting with 2bc59c72039ef5c19a7ad5c590d32fa98ae812964e18df3649b6e04171f39405 not found: ID does not exist" containerID="2bc59c72039ef5c19a7ad5c590d32fa98ae812964e18df3649b6e04171f39405" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.786241 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bc59c72039ef5c19a7ad5c590d32fa98ae812964e18df3649b6e04171f39405"} err="failed to get container status \"2bc59c72039ef5c19a7ad5c590d32fa98ae812964e18df3649b6e04171f39405\": rpc error: code = NotFound desc = could not find container \"2bc59c72039ef5c19a7ad5c590d32fa98ae812964e18df3649b6e04171f39405\": container with ID starting with 2bc59c72039ef5c19a7ad5c590d32fa98ae812964e18df3649b6e04171f39405 not found: ID does not exist" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.786277 4865 scope.go:117] "RemoveContainer" containerID="1144d7b0ff40843d5799bcbf19d9b546cf9be13f3f4805200793691558358df1" Jan 23 13:09:59 crc kubenswrapper[4865]: E0123 13:09:59.791022 4865 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1144d7b0ff40843d5799bcbf19d9b546cf9be13f3f4805200793691558358df1\": container with ID starting with 1144d7b0ff40843d5799bcbf19d9b546cf9be13f3f4805200793691558358df1 not found: ID does not exist" containerID="1144d7b0ff40843d5799bcbf19d9b546cf9be13f3f4805200793691558358df1" Jan 23 13:09:59 crc kubenswrapper[4865]: I0123 13:09:59.791086 4865 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1144d7b0ff40843d5799bcbf19d9b546cf9be13f3f4805200793691558358df1"} err="failed to get container status \"1144d7b0ff40843d5799bcbf19d9b546cf9be13f3f4805200793691558358df1\": rpc error: code = NotFound desc = could not find container \"1144d7b0ff40843d5799bcbf19d9b546cf9be13f3f4805200793691558358df1\": container with ID starting with 1144d7b0ff40843d5799bcbf19d9b546cf9be13f3f4805200793691558358df1 not found: ID does not exist" Jan 23 13:10:00 crc kubenswrapper[4865]: I0123 13:10:00.129041 4865 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ca34bd-2861-4503-89b5-df31ef0462ea" path="/var/lib/kubelet/pods/48ca34bd-2861-4503-89b5-df31ef0462ea/volumes" Jan 23 13:10:09 crc kubenswrapper[4865]: I0123 13:10:09.118291 4865 scope.go:117] "RemoveContainer" containerID="1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d" Jan 23 13:10:09 crc kubenswrapper[4865]: E0123 13:10:09.119187 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:10:18 crc kubenswrapper[4865]: I0123 13:10:18.470702 4865 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-9fl7w" podUID="e92ddc14-bdb6-4407-b8a3-047079030166" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.71:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 13:10:23 crc kubenswrapper[4865]: I0123 13:10:23.118947 4865 scope.go:117] "RemoveContainer" containerID="1a8a71e4b084726e7d31cd02faaae88a02f48f76fcee2cf2325575cdd33dd22d" Jan 23 13:10:23 crc 
kubenswrapper[4865]: E0123 13:10:23.119718 4865 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sgp5m_openshift-machine-config-operator(1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b)\"" pod="openshift-machine-config-operator/machine-config-daemon-sgp5m" podUID="1884154e-d0e9-4dc1-81b8-dd8e7f9b5b3b" Jan 23 13:10:24 crc kubenswrapper[4865]: I0123 13:10:24.828808 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-gh89m" podUID="9faffae5-73bb-4980-8092-b79a6888476d" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 13:10:25 crc kubenswrapper[4865]: I0123 13:10:25.772909 4865 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="efecb0e3-1734-4098-ac3f-3b4b8957cc40" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"